I0325 10:14:01.765590 7 e2e.go:129] Starting e2e run "c754aa9f-c2f2-45b6-800b-9be4963bdcca" on Ginkgo node 1
{"msg":"Test Suite starting","total":54,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616667240 - Will randomize all specs
Will run 54 of 5737 specs
Mar 25 10:14:01.879: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:14:01.882: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 10:14:02.028: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 10:14:02.218: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 25 10:14:02.218: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 10:14:02.218: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 10:14:02.235: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 25 10:14:02.235: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 25 10:14:02.235: INFO: e2e test version: v1.21.0-beta.1
Mar 25 10:14:02.237: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 25 10:14:02.237: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:14:02.242: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should prevent NodePort collisions
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:14:02.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
Mar 25 10:14:02.450: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-7135
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-7135
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:14:04.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7135" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":54,"completed":1,"skipped":110,"failed":0}
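The collision check above can be reproduced by hand with kubectl. The sketch below is illustrative only: the namespace nodeport-demo, the service names collision-1/collision-2 and the node port 30080 are made up, not the randomized names this run used; any port in the cluster's NodePort range (30000-32767 by default) behaves the same way.

kubectl create namespace nodeport-demo
kubectl -n nodeport-demo create service nodeport collision-1 --tcp=80:80 --node-port=30080
# Asking for the same nodePort a second time should be rejected by the API server
# with an error along the lines of "provided port is already allocated":
kubectl -n nodeport-demo create service nodeport collision-2 --tcp=80:80 --node-port=30080
# Deleting the first service releases the allocation, and the same create now succeeds:
kubectl -n nodeport-demo delete service collision-1
kubectl -n nodeport-demo create service nodeport collision-2 --tcp=80:80 --node-port=30080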
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should release NodePorts on delete
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:14:04.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-3272
STEP: deleting original service nodeport-reuse
Mar 25 10:14:05.523: INFO: Creating new host exec pod
Mar 25 10:14:05.684: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:14:07.702: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:14:09.840: INFO: The status of Pod hostexec is Running (Ready = true)
Mar 25 10:14:09.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3272 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :32024' | tail -n +2 | grep LISTEN'
Mar 25 10:14:25.282: INFO: stderr: "+ ss -ant46 'sport = :32024'\n+ + tail -ngrep +2 LISTEN\n\n"
Mar 25 10:14:25.282: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 32024
STEP: deleting service nodeport-reuse in namespace services-3272
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:14:27.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3272" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:22.797 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should release NodePorts on delete
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":54,"completed":2,"skipped":261,"failed":0}
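The port-release check in this test boils down to three commands. The sketch below reuses the values from this run (namespace services-3272, node port 32024, and the hostNetwork helper pod hostexec the test created); those objects are deleted with the test namespace, so on a live cluster you would substitute your own namespace, pod and port.

# Delete the NodePort service, then confirm from a hostNetwork pod that nothing on the
# node is still listening on the released port (the leading '!' makes an empty grep
# result count as success):
kubectl -n services-3272 delete service nodeport-reuse
kubectl -n services-3272 exec hostexec -- sh -c "! ss -ant46 'sport = :32024' | tail -n +2 | grep LISTEN"
# Because the allocation was released, a new Service may claim the same port explicitly:
kubectl -n services-3272 create service nodeport nodeport-reuse --tcp=80:80 --node-port=32024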
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should create endpoints for unready pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:14:27.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-0a31f717-b886-4b1b-b5a1-14c9fa65c0de]
STEP: Verifying pods for RC slow-terminating-unready-pod
Mar 25 10:14:27.708: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Mar 25 10:14:33.748: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-wnr67]: "NOW: 2021-03-25 10:14:33.747985176 +0000 UTC m=+1.448330961", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-9715.svc.cluster.local
Mar 25 10:14:33.748: INFO: Creating new exec pod
Mar 25 10:14:40.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-9715 exec execpod-wjr78 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9715.svc.cluster.local:80/'
Mar 25 10:14:40.445: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9715.svc.cluster.local:80/\n"
Mar 25 10:14:40.445: INFO: stdout: "NOW: 2021-03-25 10:14:40.437098273 +0000 UTC m=+8.137444062"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-9715 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Mar 25 10:14:45.614: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-9715 exec execpod-wjr78 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9715.svc.cluster.local:80/; test "$?" -ne "0"'
Mar 25 10:14:47.013: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9715.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Mar 25 10:14:47.013: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Mar 25 10:14:47.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-9715 exec execpod-wjr78 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9715.svc.cluster.local:80/'
Mar 25 10:14:48.300: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9715.svc.cluster.local:80/\n"
Mar 25 10:14:48.300: INFO: stdout: "NOW: 2021-03-25 10:14:48.289487123 +0000 UTC m=+15.989832903"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-9715
STEP: deleting service tolerate-unready in namespace services-9715
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:14:51.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9715" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• [SLOW TEST:23.796 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should create endpoints for unready pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":54,"completed":3,"skipped":298,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:14:51.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-7494
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 10:14:51.728: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 10:14:52.311: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:14:54.734: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:14:56.481: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:14:58.505: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:15:00.327: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:15:02.391: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:15:04.316: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:15:06.337: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:15:08.331: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:15:10.487: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:15:12.463: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 10:15:12.468: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 10:15:22.016: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 10:15:22.016: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 10:15:23.452: INFO: Service node-port-service in namespace nettest-7494 found.
Mar 25 10:15:25.062: INFO: Service session-affinity-service in namespace nettest-7494 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 10:15:26.284: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 10:15:27.673: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) 172.18.0.17 (node) --> 172.18.0.17:32742 (nodeIP) and getting ALL host endpoints
Mar 25 10:15:27.828: INFO: Going to poll 172.18.0.17 on port 32742 at least 0 times, with a maximum of 34 tries before failing
Mar 25 10:15:28.058: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 32742 | grep -v '^\s*$'] Namespace:nettest-7494 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 10:15:28.058: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:15:29.286: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1])
Mar 25 10:15:31.290: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 32742 | grep -v '^\s*$'] Namespace:nettest-7494 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 10:15:31.290: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:15:32.380: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:15:32.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7494" for this suite.
• [SLOW TEST:41.189 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should support basic nodePort: udp functionality
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality","total":54,"completed":4,"skipped":340,"failed":0}
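The UDP probe the test keeps repeating is easy to run by hand. The sketch below uses the node IP (172.18.0.17), the NodePort (32742) and the helper pod name (host-test-container-pod) from this particular run; all three are ephemeral, so on another cluster you would substitute the node address from kubectl get nodes -o wide and the nodePort of the node-port-service. The agnhost netserver backing the service answers the literal string "hostName" with its own pod name, which is how the test tells the two endpoints apart:

kubectl -n nettest-7494 exec host-test-container-pod -- sh -c "echo hostName | nc -w 1 -u 172.18.0.17 32742"
# Expected output is one backend name per try, e.g. "netserver-0" or "netserver-1";
# repeating the probe should eventually report every endpoint behind the NodePort.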
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should complete a service status lifecycle
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2212
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:15:32.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should complete a service status lifecycle
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2212
STEP: creating a Service
STEP: watching for the Service to be added
Mar 25 10:15:33.501: INFO: Found Service test-service-xg4gn in namespace services-6783 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}]
Mar 25 10:15:33.502: INFO: Service test-service-xg4gn created
STEP: Getting /status
Mar 25 10:15:33.506: INFO: Service test-service-xg4gn has LoadBalancer: {[]}
STEP: patching the ServiceStatus
STEP: watching for the Service to be patched
Mar 25 10:15:33.692: INFO: observed Service test-service-xg4gn in namespace services-6783 with annotations: map[] & LoadBalancer: {[]}
Mar 25 10:15:33.692: INFO: Found Service test-service-xg4gn in namespace services-6783 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]}
Mar 25 10:15:33.692: INFO: Service test-service-xg4gn has service status patched
STEP: updating the ServiceStatus
Mar 25 10:15:34.011: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Service to be updated
Mar 25 10:15:34.013: INFO: Observed Service test-service-xg4gn in namespace services-6783 with annotations: map[] & Conditions: {[]}
Mar 25 10:15:34.013: INFO: Observed event: &Service{ObjectMeta:{test-service-xg4gn services-6783 b760e23c-6eaf-4836-a7b4-f1a12f2151a9 1061789 0 2021-03-25 10:15:33 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2021-03-25 10:15:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.96.79.91,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.79.91],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},}
Mar 25 10:15:34.014: INFO: Found Service test-service-xg4gn in namespace services-6783 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Mar 25 10:15:34.014: INFO: Service test-service-xg4gn has service status updated
STEP: patching the service
STEP: watching for the Service to be patched
Mar 25 10:15:34.257: INFO: observed Service test-service-xg4gn in namespace services-6783 with labels: map[test-service-static:true]
Mar 25 10:15:34.257: INFO: observed Service test-service-xg4gn in namespace services-6783 with labels: map[test-service-static:true]
Mar 25 10:15:34.257: INFO: observed Service test-service-xg4gn in namespace services-6783 with labels: map[test-service-static:true]
Mar 25 10:15:34.257: INFO: Found Service test-service-xg4gn in namespace services-6783 with labels: map[test-service:patched test-service-static:true]
Mar 25 10:15:34.257: INFO: Service test-service-xg4gn patched
STEP: deleting the service
STEP: watching for the Service to be deleted
Mar 25 10:15:34.454: INFO: Observed event: ADDED
Mar 25 10:15:34.454: INFO: Observed event: MODIFIED
Mar 25 10:15:34.454: INFO: Observed event: MODIFIED
Mar 25 10:15:34.454: INFO: Observed event: MODIFIED
Mar 25 10:15:34.454: INFO: Found Service test-service-xg4gn in namespace services-6783 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true]
Mar 25 10:15:34.454: INFO: Service test-service-xg4gn deleted
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:15:34.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6783" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle","total":54,"completed":5,"skipped":389,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should implement service.kubernetes.io/service-proxy-name
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:15:34.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-2221
STEP: creating service service-proxy-disabled in namespace services-2221
STEP: creating replication controller service-proxy-disabled in namespace services-2221
I0325 10:15:35.987683 7 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-2221, replica count: 3
I0325 10:15:39.039267 7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:15:42.039936 7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:15:45.040964 7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:15:48.042129 7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:15:51.042954 7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: creating service in namespace services-2221
STEP: creating service service-proxy-toggled in namespace services-2221
STEP: creating replication controller service-proxy-toggled in namespace services-2221
I0325 10:15:52.373301 7 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-2221, replica count: 3
I0325 10:15:55.424307 7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:15:58.424547 7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:16:01.424678 7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0325 10:16:04.425598 7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:16:07.426775 7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:16:10.427553 7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service is up Mar 25 10:16:10.429: INFO: Creating new host exec pod Mar 25 10:16:10.522: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:12.836: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:14.607: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:16.536: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:19.631: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:20.562: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:22.663: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 25 10:16:22.663: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 25 10:16:31.056: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.7.203:80 2>&1 || true; echo; done" in pod services-2221/verify-service-up-host-exec-pod Mar 25 10:16:31.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.7.203:80 2>&1 || true; echo; done' Mar 25 10:16:32.477: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n" Mar 25 10:16:32.478: INFO: stdout: 
"service-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-to
ggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\n" Mar 25 10:16:32.478: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.7.203:80 2>&1 || true; echo; done" in pod services-2221/verify-service-up-exec-pod-cp6z7 Mar 25 10:16:32.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-up-exec-pod-cp6z7 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.7.203:80 2>&1 || true; echo; done' Mar 25 10:16:32.991: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n" Mar 25 10:16:32.991: INFO: stdout: 
"service-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-to
ggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2221 STEP: Deleting pod verify-service-up-exec-pod-cp6z7 in namespace services-2221 STEP: verifying service-disabled is not up Mar 25 10:16:34.615: INFO: Creating new host exec pod Mar 25 10:16:35.481: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:37.964: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:39.788: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:41.662: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:43.769: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:46.143: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 25 10:16:46.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.102.186:80 && echo service-down-failed' Mar 25 10:16:48.479: INFO: rc: 28 Mar 25 10:16:48.480: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.102.186:80 && echo service-down-failed" in pod services-2221/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.102.186:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.102.186:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2221 STEP: adding service-proxy-name label STEP: verifying service is not up Mar 25 10:16:51.233: INFO: Creating new host exec pod Mar 25 10:16:52.349: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:54.589: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:56.756: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:16:58.445: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with 
Ready = true) Mar 25 10:17:00.848: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:02.478: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 25 10:17:02.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.7.203:80 && echo service-down-failed' Mar 25 10:17:04.964: INFO: rc: 28 Mar 25 10:17:04.964: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.7.203:80 && echo service-down-failed" in pod services-2221/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.7.203:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.7.203:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2221 STEP: removing service-proxy-name annotation STEP: verifying service is up Mar 25 10:17:06.356: INFO: Creating new host exec pod Mar 25 10:17:07.346: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:09.357: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:11.745: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:13.433: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:15.348: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 25 10:17:15.348: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 25 10:17:24.044: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.7.203:80 2>&1 || true; echo; done" in pod services-2221/verify-service-up-host-exec-pod Mar 25 10:17:24.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.7.203:80 2>&1 || true; echo; done' Mar 25 10:17:24.535: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n" Mar 25 10:17:24.535: INFO: stdout: 
"service-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-to
ggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\n" Mar 25 10:17:24.535: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.7.203:80 2>&1 || true; echo; done" in pod services-2221/verify-service-up-exec-pod-zgvmh Mar 25 10:17:24.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-up-exec-pod-zgvmh -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.7.203:80 2>&1 || true; echo; done' Mar 25 10:17:24.928: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.7.203:80\n+ echo\n" Mar 25 10:17:24.928: INFO: stdout: 
"service-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-to
ggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-x664g\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-x664g\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-4j8sq\nservice-proxy-toggled-8hvpj\nservice-proxy-toggled-x664g\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2221 STEP: Deleting pod verify-service-up-exec-pod-zgvmh in namespace services-2221 STEP: verifying service-disabled is still not up Mar 25 10:17:27.533: INFO: Creating new host exec pod Mar 25 10:17:28.476: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:30.791: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:32.684: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:34.523: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:36.695: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:38.823: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 25 10:17:38.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.102.186:80 && echo service-down-failed' Mar 25 10:17:42.068: INFO: rc: 28 Mar 25 10:17:42.068: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.102.186:80 && echo service-down-failed" in pod services-2221/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2221 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.102.186:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.102.186:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2221 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:17:42.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2221" for this suite. 
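Editor's note: the "verifying service is up" steps above shell out to wget in a 150-iteration loop and count which backend pods answer, while the "service is not up" / "service-disabled is not up" checks expect curl with --connect-timeout 2 to fail (rc 28). For readers who want to reproduce that logic outside the e2e framework, here is a minimal standalone Go sketch. It must run somewhere that can actually reach the ClusterIPs (for example a host-network pod on a cluster node); the function names are illustrative, and the hard-coded IPs are simply the ones seen in this run.

    package main

    import (
        "errors"
        "fmt"
        "io"
        "net"
        "net/http"
        "os"
        "strings"
        "time"
    )

    // countBackends hits the service URL repeatedly and returns the set of
    // distinct response bodies (the backend pods reply with their hostnames,
    // e.g. service-proxy-toggled-4j8sq in the output above).
    func countBackends(url string, tries int) map[string]bool {
        client := &http.Client{Timeout: 1 * time.Second}
        seen := map[string]bool{}
        for i := 0; i < tries; i++ {
            resp, err := client.Get(url)
            if err != nil {
                continue // a failed try is tolerated, like the `|| true` in the log
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if name := strings.TrimSpace(string(body)); name != "" {
                seen[name] = true
            }
        }
        return seen
    }

    // expectUnreachable succeeds when nothing answers within 2s, which is what
    // the test treats as "service not programmed" (curl exiting with code 28).
    // A connection refusal is also counted as "no endpoint serving" here.
    func expectUnreachable(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err == nil {
            conn.Close()
            return errors.New("service answered but was expected to be down")
        }
        return nil
    }

    func main() {
        // ClusterIPs taken from this run: the toggled service and the disabled one.
        up := countBackends("http://10.96.7.203:80", 150)
        fmt.Printf("reached %d distinct backends: %v\n", len(up), up)
        if err := expectUnreachable("10.96.102.186:80"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

The test passes the "up" check when all three backend pod names show up in the collected set, which is exactly what the long stdout dumps above demonstrate.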
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:127.820 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should implement service.kubernetes.io/service-proxy-name /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865 ------------------------------ {"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":54,"completed":6,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] ESIPP [Slow] should work for type=NodePort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927 [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:17:42.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename esipp STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858 Mar 25 10:17:43.059: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:17:43.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "esipp-5929" for this suite. 
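Editor's note: the spec that just passed exercises the service.kubernetes.io/service-proxy-name label: while the label is set on the Service, kube-proxy ignores it and the curls time out; once the label is removed, the ClusterIP serves traffic again. The framework drives this through its own helpers, but a rough client-go equivalent of the label toggle might look like the sketch below. It assumes a client-go release whose Patch method takes a context; the namespace and service name are taken from this run, and the label value "foo" is an arbitrary non-empty placeholder.

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    const proxyNameLabel = "service.kubernetes.io/service-proxy-name"

    // setProxyName adds the service-proxy-name label via a JSON merge patch,
    // or removes it when value is empty (a null value deletes the key).
    func setProxyName(cs kubernetes.Interface, ns, svc, value string) error {
        patch := fmt.Sprintf(`{"metadata":{"labels":{"%s":"%s"}}}`, proxyNameLabel, value)
        if value == "" {
            patch = fmt.Sprintf(`{"metadata":{"labels":{"%s":null}}}`, proxyNameLabel)
        }
        _, err := cs.CoreV1().Services(ns).Patch(context.TODO(), svc,
            types.MergePatchType, []byte(patch), metav1.PatchOptions{})
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Add the label: kube-proxy should stop programming the service.
        if err := setProxyName(cs, "services-2221", "service-proxy-toggled", "foo"); err != nil {
            panic(err)
        }
        // Remove it again: the service should become reachable once more.
        if err := setProxyName(cs, "services-2221", "service-proxy-toggled", ""); err != nil {
            panic(err)
        }
    }

Between the two calls one would re-run the up/down verification shown earlier to observe kube-proxy honoring the label.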
[AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866 S [SKIPPING] in Spec Setup (BeforeEach) [0.295 seconds] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should work for type=NodePort [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:17:43.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for service endpoints using hostNetwork /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492 STEP: Performing setup for networking test in namespace nettest-8009 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:17:43.798: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:17:44.040: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:46.132: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:48.068: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:17:50.266: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:17:52.049: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:17:54.632: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:17:56.439: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:17:58.087: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:18:00.716: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:18:02.291: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:18:04.056: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:18:06.528: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:18:08.200: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:18:08.343: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:18:22.431: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:18:22.432: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 
25 10:18:24.578: INFO: Service node-port-service in namespace nettest-8009 found. Mar 25 10:18:25.182: INFO: Service session-affinity-service in namespace nettest-8009 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:18:26.207: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:18:27.645: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: pod-Service(hostNetwork): http STEP: dialing(http) test-container-pod --> 10.96.79.176:80 (config.clusterIP) Mar 25 10:18:27.829: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:27.829: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:28.281: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:30.548: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:30.548: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:30.901: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:32.904: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:32.905: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:33.014: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:35.017: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:35.017: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:35.105: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:37.108: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:37.109: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:37.557: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:39.601: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:39.601: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:39.814: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:42.081: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:42.081: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:42.432: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:44.440: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:44.440: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:44.854: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:46.871: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:46.871: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:46.963: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:48.966: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:48.966: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:49.075: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:51.080: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:51.080: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:51.206: INFO: Waiting for responses: map[] Mar 25 10:18:51.206: INFO: reached 10.96.79.176 after 10/34 tries STEP: dialing(http) test-container-pod --> 172.18.0.17:32435 (nodeIP) Mar 25 10:18:51.209: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32435&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:51.209: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:51.302: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:53.314: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32435&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:53.314: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:53.415: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:55.419: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32435&tries=1'] Namespace:nettest-8009 PodName:test-container-pod 
ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:55.419: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:55.513: INFO: Waiting for responses: map[latest-worker2:{}] Mar 25 10:18:57.554: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.75:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32435&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:57.555: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:57.972: INFO: Waiting for responses: map[] Mar 25 10:18:57.972: INFO: reached 172.18.0.17 after 3/34 tries STEP: node-Service(hostNetwork): http STEP: dialing(http) 172.18.0.17 (node) --> 10.96.79.176:80 (config.clusterIP) Mar 25 10:18:57.972: INFO: Going to poll 10.96.79.176 on port 80 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:18:58.086: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.79.176:80/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:18:58.086: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:18:58.179: INFO: Waiting for [latest-worker] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker2]) Mar 25 10:19:00.189: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.79.176:80/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:00.189: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:00.280: INFO: Waiting for [latest-worker] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker2]) Mar 25 10:19:02.285: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.79.176:80/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:02.285: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:02.401: INFO: Waiting for [latest-worker] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker2]) Mar 25 10:19:04.406: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.79.176:80/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:04.406: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:04.509: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: dialing(http) 172.18.0.17 (node) --> 172.18.0.17:32435 (nodeIP) Mar 25 10:19:04.509: INFO: Going to poll 172.18.0.17 on port 32435 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:19:04.512: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32435/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:04.512: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:04.606: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 10:19:06.698: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32435/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:06.698: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:06.962: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 10:19:08.966: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32435/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:08.966: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:09.095: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 10:19:11.213: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32435/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:11.213: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:11.316: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 10:19:13.386: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32435/hostName | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:13.386: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:13.558: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: node-Service(hostNetwork): udp STEP: dialing(udp) 172.18.0.17 (node) --> 10.96.79.176:90 (config.clusterIP) Mar 25 10:19:13.558: INFO: Going to poll 10.96.79.176 on port 90 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:19:13.562: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.79.176 90 | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:13.562: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:14.653: INFO: Waiting for [latest-worker] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker2]) Mar 25 10:19:17.273: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.79.176 90 | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:17.273: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:18.951: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: dialing(udp) 172.18.0.17 (node) --> 
172.18.0.17:31760 (nodeIP) Mar 25 10:19:18.951: INFO: Going to poll 172.18.0.17 on port 31760 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:19:19.028: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 31760 | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:19.028: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:20.282: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 10:19:22.531: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 31760 | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:22.531: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:23.663: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 10:19:25.667: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 31760 | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:25.667: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:26.871: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 25 10:19:28.874: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 31760 | grep -v '^\s*$'] Namespace:nettest-8009 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:28.874: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:29.955: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: handle large requests: http(hostNetwork) STEP: dialing(http) test-container-pod --> 10.96.79.176:80 (config.clusterIP) Mar 25 10:19:29.958: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.75:9080/dial?request=echo?msg=42424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242&protocol=http&host=10.96.79.176&port=80&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:29.958: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:30.032: INFO: Waiting for responses: map[] Mar 25 10:19:30.033: INFO: reached 10.96.79.176 after 0/34 tries STEP: handle large requests: udp(hostNetwork) STEP: dialing(udp) test-container-pod --> 10.96.79.176:90 (config.clusterIP) Mar 25 10:19:30.035: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.75:9080/dial?request=echo%20nooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.96.79.176&port=90&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:30.035: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:35.133: INFO: Waiting for responses: 
map[nooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo:{}] Mar 25 10:19:37.136: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.75:9080/dial?request=echo%20nooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.96.79.176&port=90&tries=1'] Namespace:nettest-8009 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:19:37.137: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:19:37.242: INFO: Waiting for responses: map[] Mar 25 10:19:37.242: INFO: reached 10.96.79.176 after 1/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:19:37.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-8009" for this suite. 
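Editor's note: most of the checks in this spec go through the agnhost netexec /dial endpoint. The framework execs a curl inside test-container-pod against http://<pod>:9080/dial?request=hostname&protocol=...&host=...&port=...&tries=1 and keeps polling (up to MaxTries=34 here, derived from the endpoint count of 2) until every expected endpoint has answered at least once. A standalone sketch of that polling loop is below; it assumes the pod IP is reachable from where it runs and that /dial returns a JSON object with a "responses" array, which is how the framework is assumed to interpret the replies in this log. The IPs, port, and node names are the ones from this run.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
        "time"
    )

    // dialResponse mirrors the JSON shape the netexec /dial endpoint is assumed
    // to return: one entry per reply it collected from the dialed target.
    type dialResponse struct {
        Responses []string `json:"responses"`
    }

    // dialFromPod asks the netexec container (listening on podIP:9080) to dial
    // host:port with the given protocol and report the hostnames it received.
    func dialFromPod(podIP, protocol, host string, port, tries int) ([]string, error) {
        q := url.Values{}
        q.Set("request", "hostname")
        q.Set("protocol", protocol)
        q.Set("host", host)
        q.Set("port", fmt.Sprint(port))
        q.Set("tries", fmt.Sprint(tries))
        u := fmt.Sprintf("http://%s:9080/dial?%s", podIP, q.Encode())

        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Get(u)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var dr dialResponse
        if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
            return nil, err
        }
        return dr.Responses, nil
    }

    func main() {
        // Expected endpoints in this run are the two hostNetwork netserver pods,
        // which answer with their node names.
        want := map[string]bool{"latest-worker": true, "latest-worker2": true}
        for try := 0; try < 34 && len(want) > 0; try++ { // 34 = MaxTries from the log
            got, err := dialFromPod("10.244.2.75", "http", "10.96.79.176", 80, 1)
            if err != nil {
                continue
            }
            for _, name := range got {
                delete(want, name)
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Printf("still waiting for: %v\n", want)
    }

The "Waiting for responses: map[latest-worker2:{}]" lines above are the framework printing exactly this shrinking set of endpoints it has not yet heard from.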
• [SLOW TEST:114.194 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for service endpoints using hostNetwork /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork","total":54,"completed":7,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Provider:GCE] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:19:37.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Provider:GCE] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 Mar 25 10:19:37.623: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:19:37.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4984" for this suite. 
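The DNS spec above was skipped by a provider gate: it is tagged [Provider:GCE] and bails out of its It block on any other provider, which is why a kind ("local") cluster records it as skipped rather than failed in the summary that follows. A minimal sketch of that pattern, with import paths assumed from the k8s.io/kubernetes test tree of this era, looks like:

package dns

import (
	"github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("[sig-network] DNS", func() {
	f := framework.NewDefaultFramework("dns")
	_ = f // the framework handles the namespace/client setup logged above

	ginkgo.It("should provide DNS for the cluster [Provider:GCE]", func() {
		// On a local kind cluster this logs "Only supported for providers
		// [gce gke] (not local)" and marks the spec as skipped.
		e2eskipper.SkipUnlessProviderIs("gce", "gke")

		// ... the actual cluster-DNS assertions would run here on GCE/GKE ...
	})
})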
S [SKIPPING] [0.380 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Provider:GCE] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203 [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:19:37.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename conntrack STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96 [It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203 STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-5575 STEP: creating a client pod for probing the service svc-udp Mar 25 10:19:38.309: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:40.313: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:42.548: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:44.423: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:46.458: INFO: The status of Pod pod-client is Running (Ready = true) Mar 25 10:19:46.713: INFO: Pod client logs: Thu Mar 25 10:19:44 UTC 2021 Thu Mar 25 10:19:44 UTC 2021 Try: 1 Thu Mar 25 10:19:44 UTC 2021 Try: 2 Thu Mar 25 10:19:44 UTC 2021 Try: 3 Thu Mar 25 10:19:44 UTC 2021 Try: 4 Thu Mar 25 10:19:44 UTC 2021 Try: 5 Thu Mar 25 10:19:44 UTC 2021 Try: 6 Thu Mar 25 10:19:44 UTC 2021 Try: 7 STEP: creating a backend pod pod-server-1 for the service svc-udp Mar 25 10:19:47.041: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:49.104: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:52.412: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:53.113: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:19:55.499: INFO: The status of Pod pod-server-1 is Running (Ready = true) STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-5575 to expose endpoints map[pod-server-1:[80]] Mar 25 10:19:55.751: INFO: successfully validated that service svc-udp in namespace conntrack-5575 exposes endpoints map[pod-server-1:[80]] STEP: checking client pod connected to the backend 1 on Node IP 172.18.0.15 
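Conceptually, pod-client above is just a UDP sender in a loop: it keeps firing datagrams at the svc-udp ClusterIP, logging "Try: N" for each attempt, and the test later checks which backend received the traffic, proving that a stale conntrack entry for a deleted backend does not black-hole the flow. In the suite the client is a shell loop inside an agnhost pod; a rough stand-alone Go equivalent is sketched below, where the ClusterIP:port is a placeholder rather than a value from this run.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder ClusterIP:port for svc-udp; the e2e test resolves these
	// from the Service it just created.
	const target = "10.96.0.123:80"

	conn, err := net.Dial("udp", target) // one connected socket => one stable conntrack entry
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	buf := make([]byte, 1024)
	for try := 1; ; try++ {
		fmt.Printf("%s Try: %d\n", time.Now().UTC().Format(time.UnixDate), try)

		if _, err := conn.Write([]byte("hostname\n")); err != nil {
			fmt.Println("write error:", err)
		}

		// The agnhost UDP server replies with its hostname, which tells us
		// which backend pod (pod-server-1 or pod-server-2) answered.
		conn.SetReadDeadline(time.Now().Add(time.Second))
		if n, err := conn.Read(buf); err == nil {
			fmt.Println("reply from backend:", string(buf[:n]))
		}

		time.Sleep(time.Second)
	}
}

Using a single connected UDP socket matters here: it keeps the 5-tuple stable, so kube-proxy and conntrack must keep mapping it to a live backend while the server pods are replaced.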
STEP: creating a second backend pod pod-server-2 for the service svc-udp Mar 25 10:20:06.327: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:20:08.986: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:20:10.891: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:20:12.513: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:20:14.429: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:20:16.431: INFO: The status of Pod pod-server-2 is Running (Ready = true) Mar 25 10:20:16.435: INFO: Cleaning up pod-server-1 pod Mar 25 10:20:16.971: INFO: Waiting for pod pod-server-1 to disappear Mar 25 10:20:17.042: INFO: Pod pod-server-1 no longer exists STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-5575 to expose endpoints map[pod-server-2:[80]] Mar 25 10:20:17.236: INFO: successfully validated that service svc-udp in namespace conntrack-5575 exposes endpoints map[pod-server-2:[80]] STEP: checking client pod connected to the backend 2 on Node IP 172.18.0.15 [AfterEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:20:27.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "conntrack-5575" for this suite. • [SLOW TEST:49.858 seconds] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to preserve UDP traffic when server pod cycles for a ClusterIP service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203 ------------------------------ {"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":54,"completed":8,"skipped":735,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:20:27.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should update nodePort: udp [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397 STEP: Performing setup for networking test in namespace nettest-3101 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:20:27.890: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:20:28.044: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:20:30.154: INFO: The 
status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:20:32.372: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:20:34.215: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:20:36.047: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:20:38.351: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:20:40.292: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:20:42.048: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:20:44.048: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:20:46.047: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:20:48.153: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:20:50.855: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:20:50.861: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:21:03.658: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:21:03.658: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:21:04.304: INFO: Service node-port-service in namespace nettest-3101 found. Mar 25 10:21:04.970: INFO: Service session-affinity-service in namespace nettest-3101 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:21:06.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:07.139: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:08.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:09.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:10.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:11.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:12.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:13.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:14.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:15.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:16.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:17.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:18.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:19.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:20.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:21.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:22.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:23.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:24.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:25.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:26.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 
10:21:27.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:28.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:29.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:31.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:32.139: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:33.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:35.139: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:35.186: INFO: Waiting for amount of service:node-port-service endpoints to be 2 Mar 25 10:21:35.409: FAIL: failed to validate endpoints for service node-port-service in namespace: nettest-3101 Unexpected error: <*errors.errorString | 0xc00025e250>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00173c7e0, 0xc003cbec90) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:802 +0x525 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0008aa420, 0xc001a6d190, 0x1, 0x1, 0xc00062e800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:129 +0x165 k8s.io/kubernetes/test/e2e/network.glob..func20.6.15() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:398 +0x6d k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00294cf00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00294cf00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00294cf00, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "nettest-3101". STEP: Found 17 events. 
Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:27 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-3101/netserver-0 to latest-worker Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:28 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-3101/netserver-1 to latest-worker2 Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:29 +0000 UTC - event for netserver-0: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:30 +0000 UTC - event for netserver-1: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:31 +0000 UTC - event for netserver-0: {kubelet latest-worker} Created: Created container webserver Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:32 +0000 UTC - event for netserver-0: {kubelet latest-worker} Started: Started container webserver Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:32 +0000 UTC - event for netserver-0: {taint-controller } TaintManagerEviction: Marking for deletion Pod nettest-3101/netserver-0 Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:32 +0000 UTC - event for netserver-1: {kubelet latest-worker2} Created: Created container webserver Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:33 +0000 UTC - event for netserver-1: {kubelet latest-worker2} Started: Started container webserver Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:51 +0000 UTC - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-3101/test-container-pod to latest-worker Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:52 +0000 UTC - event for host-test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-3101/host-test-container-pod to latest-worker2 Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:54 +0000 UTC - event for host-test-container-pod: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:55 +0000 UTC - event for test-container-pod: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:57 +0000 UTC - event for host-test-container-pod: {kubelet latest-worker2} Created: Created container agnhost-container Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:58 +0000 UTC - event for host-test-container-pod: {kubelet latest-worker2} Started: Started container agnhost-container Mar 25 10:21:35.450: INFO: At 2021-03-25 10:20:59 +0000 UTC - event for test-container-pod: {kubelet latest-worker} Created: Created container webserver Mar 25 10:21:35.450: INFO: At 2021-03-25 10:21:00 +0000 UTC - event for test-container-pod: {kubelet latest-worker} Started: Started container webserver Mar 25 10:21:35.510: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 10:21:35.510: INFO: host-test-container-pod latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:51 +0000 UTC }] Mar 25 10:21:35.510: INFO: netserver-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 
+0000 UTC 2021-03-25 10:20:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:27 +0000 UTC }] Mar 25 10:21:35.510: INFO: netserver-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:27 +0000 UTC }] Mar 25 10:21:35.510: INFO: test-container-pod latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:21:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:21:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:20:51 +0000 UTC }] Mar 25 10:21:35.510: INFO: Mar 25 10:21:35.513: INFO: Logging node info for node latest-control-plane Mar 25 10:21:35.563: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1063172 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:18:40 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:21:35.564: INFO: Logging kubelet events for node latest-control-plane Mar 25 10:21:35.729: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 10:21:35.736: INFO: coredns-74ff55c5b-rfzq5 started at 2021-03-24 19:40:49 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container coredns ready: true, restart count 0 Mar 25 10:21:35.736: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container etcd ready: true, restart count 0 Mar 25 10:21:35.736: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 10:21:35.736: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:21:35.736: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:21:35.736: INFO: coredns-74ff55c5b-smtp9 started at 2021-03-24 19:40:48 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container coredns ready: true, restart count 0 Mar 25 10:21:35.736: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:21:35.736: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 10:21:35.736: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:35.736: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 10:21:35.761472 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
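The per-node dumps in this AfterEach ("Logging pods the kubelet thinks is on node …") are essentially pod listings filtered by node. A minimal client-go sketch of the same query is below; the kubeconfig path and node name are taken from the log, and this is an approximation of the framework's dump helper, not its actual code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Rough equivalent of the per-node dump: list pods across all namespaces
	// whose spec.nodeName matches the node being inspected.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=latest-control-plane",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s: container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}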
Mar 25 10:21:36.013: INFO: Latency metrics for node latest-control-plane Mar 25 10:21:36.013: INFO: Logging node info for node latest-worker Mar 25 10:21:36.077: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1063961 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:17:00 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:21:36.078: INFO: Logging kubelet events for node latest-worker Mar 25 10:21:36.162: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:21:36.170: INFO: pod-projected-secrets-1009b1e1-c14d-42da-a897-cd7251502479 started at 2021-03-25 10:21:10 +0000 UTC (0+3 container statuses recorded) Mar 25 10:21:36.170: INFO: Container creates-volume-test ready: true, restart count 0 Mar 25 10:21:36.170: INFO: Container dels-volume-test ready: true, restart count 0 Mar 25 10:21:36.170: INFO: Container upds-volume-test ready: true, restart count 0 Mar 25 10:21:36.170: INFO: daemon-set-glx65 started at 2021-03-25 10:21:24 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:36.170: INFO: Container app ready: true, restart count 0 Mar 25 10:21:36.170: INFO: iperf2-clients-2wd7j started at 2021-03-25 10:21:25 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:36.170: INFO: Container iperf2-client ready: false, restart count 0 Mar 25 10:21:36.170: INFO: test-container-pod started at 2021-03-25 10:20:51 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:36.170: INFO: Container webserver ready: true, restart count 0 Mar 25 10:21:36.170: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:36.170: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:21:36.170: INFO: kindnet-485hg started at 2021-03-25 10:20:57 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:36.170: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:21:36.170: INFO: netserver-0 started at 2021-03-25 10:20:27 +0000 UTC (0+1 container statuses recorded) Mar 25 
10:21:36.170: INFO: Container webserver ready: true, restart count 0 W0325 10:21:36.250149 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:21:36.411: INFO: Latency metrics for node latest-worker Mar 25 10:21:36.411: INFO: Logging node info for node latest-worker2 Mar 25 10:21:36.669: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1062584 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:00:17 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:17:10 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:21:36.669: INFO: Logging kubelet events for node latest-worker2 Mar 25 10:21:36.672: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 10:21:37.121: INFO: rand-non-local-vs7rv started at 2021-03-25 09:56:22 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container c ready: false, restart count 0 Mar 25 10:21:37.121: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:21:37.121: INFO: host-test-container-pod started at 2021-03-25 10:20:52 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 10:21:37.121: INFO: iperf2-server-deployment-7cd557866b-t5tk8 started at 2021-03-25 10:21:16 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container iperf2-server ready: true, restart count 0 Mar 25 10:21:37.121: INFO: no-snat-test6d9cp started at (0+0 container statuses recorded) Mar 25 10:21:37.121: INFO: netserver-1 started at 2021-03-25 10:20:28 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container webserver ready: true, restart count 0 Mar 25 10:21:37.121: INFO: iperf2-clients-7vxql started at 2021-03-25 10:21:24 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container iperf2-client ready: true, restart count 0 Mar 25 10:21:37.121: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container volume-tester ready: false, restart count 0 Mar 25 10:21:37.121: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:21:37.121: INFO: daemon-set-ltnqm started at 2021-03-25 10:21:23 +0000 UTC (0+1 container statuses recorded) Mar 25 10:21:37.121: INFO: Container app ready: true, restart count 0 W0325 10:21:37.532367 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:21:37.665: INFO: Latency metrics for node latest-worker2 Mar 25 10:21:37.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-3101" for this suite. 
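Editor's note: the failure verdict recorded below ("failed to validate endpoints for service node-port-service in namespace: nettest-3101 ... timed out waiting for the condition", framework/network/utils.go:802) indicates the framework gave up waiting for the node-port-service Endpoints object to report its expected backends; the analogous passing specs later in this run wait for that service to expose 2 endpoints. The node status, image list and kubelet pod dumps above are the diagnostics the framework emits for the involved nodes after such a failure. A minimal, hedged sketch of an equivalent endpoint-count check written against client-go (namespace and service name taken from the log, the count of 2 from the analogous passing specs; the kubeconfig path, poll interval and timeout are illustrative assumptions, not the framework's actual code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the same kubeconfig this e2e run uses (illustrative path).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	const ns, svc, want = "nettest-3101", "node-port-service", 2

    	// Poll until the Endpoints object for the service reports the expected
    	// number of ready addresses, or time out (the condition that failed above).
    	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
    		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // Endpoints not visible yet; keep polling.
    		}
    		got := 0
    		for _, subset := range ep.Subsets {
    			got += len(subset.Addresses)
    		}
    		fmt.Printf("service %s/%s has %d/%d ready endpoint addresses\n", ns, svc, got, want)
    		return got == want, nil
    	})
    	if err != nil {
    		fmt.Println("timed out waiting for the condition:", err)
    	}
    }
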
• Failure [70.172 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update nodePort: udp [Slow] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397 Mar 25 10:21:35.409: failed to validate endpoints for service node-port-service in namespace: nettest-3101 Unexpected error: <*errors.errorString | 0xc00025e250>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:802 ------------------------------ {"msg":"FAILED [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","total":54,"completed":8,"skipped":800,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for pod-Service(hostNetwork): udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:21:37.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for pod-Service(hostNetwork): udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473 Mar 25 10:21:38.929: INFO: skip because pods can not reach the endpoint in the same host if using UDP and hostNetwork #95565 [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:21:38.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-755" for this suite. 
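Editor's note: both the nodePort: udp spec that failed above and the spec skipped here (per the logged reason, pods cannot reach an endpoint on the same host over UDP when the backend uses hostNetwork, issue #95565; the [SKIPPING] summary follows) ultimately probe the agnhost netserver over UDP. A hedged sketch of such a raw UDP probe follows; the pod address, the UDP port 8081 and the "hostname" command are all assumptions about the netserver's UDP handler rather than values shown in this log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Probe a netserver pod over UDP by sending an assumed "hostname" command
    // and printing whatever the server echoes back. The address is a placeholder.
    func main() {
    	conn, err := net.DialTimeout("udp", "10.244.0.100:8081", 3*time.Second)
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	conn.SetDeadline(time.Now().Add(3 * time.Second))
    	if _, err := conn.Write([]byte("hostname")); err != nil {
    		panic(err)
    	}

    	buf := make([]byte, 256)
    	n, err := conn.Read(buf)
    	if err != nil {
    		// The hostNetwork-on-the-same-node case described by #95565 would
    		// surface here as a read timeout rather than a reply.
    		fmt.Println("no UDP reply:", err)
    		return
    	}
    	fmt.Println("UDP reply:", string(buf[:n]))
    }
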
S [SKIPPING] [2.345 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for pod-Service(hostNetwork): udp [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473 skip because pods can not reach the endpoint in the same host if using UDP and hostNetwork #95565 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for pod-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:21:40.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for pod-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153 STEP: Performing setup for networking test in namespace nettest-9062 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:21:41.880: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:21:43.082: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:21:45.286: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:21:47.633: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:21:49.357: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:21:51.413: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:21:53.724: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:21:55.359: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:21:57.385: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:21:59.432: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:01.304: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:03.458: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:05.123: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:07.085: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:09.831: INFO: The status of Pod netserver-0 
is Running (Ready = true) Mar 25 10:22:09.905: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:22:18.646: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:22:18.646: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:22:21.819: INFO: Service node-port-service in namespace nettest-9062 found. Mar 25 10:22:22.983: INFO: Service session-affinity-service in namespace nettest-9062 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:22:23.986: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:22:25.447: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) test-container-pod --> 10.96.45.91:80 (config.clusterIP) Mar 25 10:22:25.495: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:9080/dial?request=hostname&protocol=http&host=10.96.45.91&port=80&tries=1'] Namespace:nettest-9062 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:22:25.495: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:22:25.636: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:22:27.645: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:9080/dial?request=hostname&protocol=http&host=10.96.45.91&port=80&tries=1'] Namespace:nettest-9062 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:22:27.645: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:22:27.793: INFO: Waiting for responses: map[] Mar 25 10:22:27.793: INFO: reached 10.96.45.91 after 1/34 tries STEP: dialing(http) test-container-pod --> 172.18.0.17:32714 (nodeIP) Mar 25 10:22:27.796: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32714&tries=1'] Namespace:nettest-9062 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:22:27.796: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:22:27.895: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:22:29.899: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:9080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32714&tries=1'] Namespace:nettest-9062 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:22:29.899: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:22:30.031: INFO: Waiting for responses: map[] Mar 25 10:22:30.031: INFO: reached 172.18.0.17 after 1/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:22:30.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-9062" for this suite. 
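Editor's note: the spec above drives its connectivity checks through the agnhost test pod's /dial endpoint: the test container is asked to curl the target (cluster IP, or nodeIP:nodePort) and report which backends answered, e.g. http://10.244.1.17:9080/dial?request=hostname&protocol=http&host=10.96.45.91&port=80&tries=1, and the probe returns a JSON object listing the responding hostnames (the session-affinity spec below shows the shape, {"responses":["netserver-0"]}). A hedged Go sketch of that probe, using the URL values copied from the log and a direct HTTP GET instead of the framework's kubectl exec + curl (so it only works from a vantage point with pod-network reachability):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"net/url"
    	"time"
    )

    // dialResponse mirrors the JSON the netserver /dial handler returns,
    // e.g. {"responses":["netserver-0","netserver-1"]}.
    type dialResponse struct {
    	Responses []string `json:"responses"`
    }

    func main() {
    	// Test-container pod IP, service cluster IP and port copied from the log
    	// above; the test pod proxies the request and reports which backends replied.
    	probe := url.URL{
    		Scheme:   "http",
    		Host:     "10.244.1.17:9080",
    		Path:     "/dial",
    		RawQuery: "request=hostname&protocol=http&host=10.96.45.91&port=80&tries=1",
    	}

    	client := &http.Client{Timeout: 5 * time.Second}
    	resp, err := client.Get(probe.String())
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	var dr dialResponse
    	if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
    		panic(err)
    	}
    	fmt.Println("backends that answered:", dr.Responses) // e.g. [netserver-1]
    }
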
• [SLOW TEST:50.014 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for pod-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: http","total":54,"completed":9,"skipped":1381,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:22:30.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for client IP based session affinity: http [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416 STEP: Performing setup for networking test in namespace nettest-8888 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:22:30.353: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:22:31.239: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:22:33.381: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:22:35.297: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:22:37.735: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:22:39.341: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:22:41.653: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:43.289: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:45.720: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:47.353: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:49.451: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:51.242: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:53.465: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:55.241: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:57.287: INFO: 
The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:22:59.250: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:22:59.253: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 10:23:01.282: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:23:09.445: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:23:09.445: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:23:10.408: INFO: Service node-port-service in namespace nettest-8888 found. Mar 25 10:23:12.606: INFO: Service session-affinity-service in namespace nettest-8888 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:23:14.024: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:23:15.370: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) test-container-pod --> 10.96.22.204:80 Mar 25 10:23:16.012: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:16.012: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:16.181: INFO: Tries: 10, in try: 0, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:18.594: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:18.594: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:19.786: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:22.310: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:22.310: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:22.875: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-0"]}, stderr: , command run 
in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:25.852: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:25.852: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:26.553: INFO: Tries: 10, in try: 3, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:29.424: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:29.424: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:29.694: INFO: Tries: 10, in try: 4, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:31.763: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:31.763: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:31.853: INFO: Tries: 10, in try: 5, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:33.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:33.932: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:34.281: INFO: Tries: 10, in try: 6, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:37.257: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:37.257: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:37.651: INFO: Tries: 10, in try: 7, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:40.059: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:40.059: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:40.686: INFO: Tries: 10, in try: 8, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } Mar 25 10:23:43.282: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.94:9080/dial?request=hostName&protocol=http&host=10.96.22.204&port=80&tries=1'] Namespace:nettest-8888 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:23:43.282: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:23:43.583: INFO: Tries: 10, in try: 9, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-8888, hostIp: 172.18.0.17, podIp: 10.244.2.94, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:23:01 +0000 UTC }]" } [AfterEach] [sig-network] Networking 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:23:45.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-8888" for this suite. • [SLOW TEST:75.656 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for client IP based session affinity: http [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]","total":54,"completed":10,"skipped":1618,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:323 [BeforeEach] Change stubDomain /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:23:45.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns-config-map STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to change stubDomain configuration [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:323 STEP: Finding a DNS pod Mar 25 10:23:48.212: INFO: Using DNS pod: coredns-74ff55c5b-rfzq5 Mar 25 10:23:48.914: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-configmap-f4466de8-da26-4b47-9c06-4a366f91b1ce dns-config-map-5258 0793db55-1c22-430c-9f91-b299d761927b 1065299 0 2021-03-25 10:23:48 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:23:48 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":10101,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2gdkt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2gdkt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:10101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2gdkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerSta
tuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:24:00.234: INFO: Created service &Service{ObjectMeta:{e2e-dns-configmap dns-config-map-5258 1fb6f5a1-3214-424e-b718-967d220e7c13 1065383 0 2021-03-25 10:23:59 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:23:59 +0000 UTC FieldsV1 {"f:spec":{"f:ports":{".":{},"k:{\"port\":10101,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{".":{},"f:app":{}},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:10101,TargetPort:{0 10101 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: e2e-dns-configmap,},ClusterIP:10.96.144.187,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.144.187],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} Mar 25 10:24:00.328: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-d57a224f-fcd4-49a7-a5b6-157715af3a3b dns-config-map-5258 c10880e9-15af-4d59-8d83-55b62d235a99 1065389 0 2021-03-25 10:24:00 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:24:00 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-5d4m2,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-2gdkt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2gdkt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[/coredns],Args:[
-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-2gdkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { health ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } forward . 10.244.1.25 } acme.local:53 { forward . 
10.244.1.25 }] BinaryData:map[]} Mar 25 10:24:09.741: INFO: ExecWithOptions {Command:[dig +short abc.acme.local] Namespace:dns-config-map-5258 PodName:e2e-dns-configmap-f4466de8-da26-4b47-9c06-4a366f91b1ce ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:24:09.741: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:24:25.961: INFO: Running dig: [dig +short abc.acme.local], stdout: ";; connection timed out; no servers could be reached", stderr: "", err: command terminated with exit code 9 Mar 25 10:24:26.962: INFO: ExecWithOptions {Command:[dig +short abc.acme.local] Namespace:dns-config-map-5258 PodName:e2e-dns-configmap-f4466de8-da26-4b47-9c06-4a366f91b1ce ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:24:26.962: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:24:27.914: INFO: Running dig: [dig +short abc.acme.local], stdout: "1.1.1.1", stderr: "", err: Mar 25 10:24:27.914: INFO: ExecWithOptions {Command:[dig +short def.acme.local] Namespace:dns-config-map-5258 PodName:e2e-dns-configmap-f4466de8-da26-4b47-9c06-4a366f91b1ce ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:24:27.914: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:24:28.032: INFO: Running dig: [dig +short def.acme.local], stdout: "2.2.2.2", stderr: "", err: Mar 25 10:24:28.033: INFO: ExecWithOptions {Command:[dig +short widget.local] Namespace:dns-config-map-5258 PodName:e2e-dns-configmap-f4466de8-da26-4b47-9c06-4a366f91b1ce ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:24:28.033: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:24:30.680: INFO: Running dig: [dig +short widget.local], stdout: "3.3.3.3", stderr: "", err: STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . 
/etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]} Mar 25 10:24:34.354: INFO: ExecWithOptions {Command:[dig +short abc.acme.local] Namespace:dns-config-map-5258 PodName:e2e-dns-configmap-f4466de8-da26-4b47-9c06-4a366f91b1ce ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:24:34.354: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:24:44.802: INFO: Running dig: [dig +short abc.acme.local], stdout: "", stderr: "", err: STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]} [AfterEach] Change stubDomain /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:24:52.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-config-map-5258" for this suite. • [SLOW TEST:67.405 seconds] [sig-network] DNS configMap nameserver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Change stubDomain /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:320 should be able to change stubDomain configuration [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:323 ------------------------------ {"msg":"PASSED [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]","total":54,"completed":11,"skipped":1704,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722 [BeforeEach] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:24:53.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69 Mar 25 10:24:55.418: INFO: Found ClusterRoles; assuming RBAC is enabled. 
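Editor's note on the stubDomain spec that just passed above: its verification loop runs dig inside the client pod for each configured name and checks the forwarded answers (abc.acme.local -> 1.1.1.1, def.acme.local -> 2.2.2.2, widget.local -> 3.3.3.3). A hedged sketch of that check as a standalone program shelling out to kubectl; the namespace and pod name are copied from the log but are ephemeral test fixtures, and the expected answers come from the spec's own test DNS server, so this only works while that spec's resources exist:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Namespace and client pod name as logged by the stubDomain spec above.
    	const ns = "dns-config-map-5258"
    	const pod = "e2e-dns-configmap-f4466de8-da26-4b47-9c06-4a366f91b1ce"

    	// Expected answers served by the spec's test DNS server, which the
    	// updated coredns Corefile forwards these zones to.
    	checks := map[string]string{
    		"abc.acme.local": "1.1.1.1",
    		"def.acme.local": "2.2.2.2",
    		"widget.local":   "3.3.3.3",
    	}

    	for name, want := range checks {
    		out, err := exec.Command("kubectl", "--namespace", ns, "exec", pod,
    			"--", "dig", "+short", name).CombinedOutput()
    		got := strings.TrimSpace(string(out))
    		if err != nil || got != want {
    			fmt.Printf("%s: got %q (err=%v), want %q\n", name, got, err, want)
    			continue
    		}
    		fmt.Printf("%s resolved to %s as expected\n", name, got)
    	}
    }
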
[BeforeEach] [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688 Mar 25 10:24:55.739: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706 STEP: No ingress created, no cleanup necessary [AfterEach] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:24:55.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-1040" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [3.380 seconds] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685 should conform to Ingress spec [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:24:56.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903 Mar 25 10:24:58.005: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:00.191: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:02.114: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:04.144: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Mar 25 10:25:04.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-4185 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Mar 25 10:25:22.879: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Mar 25 10:25:22.879: INFO: stdout: "iptables" Mar 25 10:25:22.879: INFO: proxyMode: iptables Mar 25 10:25:23.221: INFO: Waiting for pod kube-proxy-mode-detector to disappear Mar 25 10:25:23.613: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace 
services-4185 Mar 25 10:25:23.936: INFO: sourceip-test cluster ip: 10.96.163.75 STEP: Picking 2 Nodes to test whether source IP is preserved or not STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip Mar 25 10:25:24.503: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:26.556: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:29.045: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:30.703: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:33.234: INFO: The status of Pod echo-sourceip is Running (Ready = true) STEP: waiting up to 3m0s for service sourceip-test in namespace services-4185 to expose endpoints map[echo-sourceip:[8080]] Mar 25 10:25:33.728: INFO: successfully validated that service sourceip-test in namespace services-4185 exposes endpoints map[echo-sourceip:[8080]] STEP: Creating pause pod deployment Mar 25 10:25:35.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)} Mar 25 10:25:38.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264735, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264735, loc:(*time.Location)(0x99208a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"pause-pod-8687c95844\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Mar 25 10:25:40.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264738, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264735, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-8687c95844\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:25:41.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264738, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264735, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-8687c95844\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:25:44.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264738, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264735, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-8687c95844\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:25:45.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264738, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264735, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-8687c95844\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 25 10:25:47.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264736, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264747, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752264735, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-8687c95844\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 25 10:25:49.640: INFO: Waiting up to 2m0s to get response from 10.96.163.75:8080 Mar 25 10:25:49.640: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-4185 exec pause-pod-8687c95844-dfmbq -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.163.75:8080/clientip' Mar 25 10:25:49.967: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.96.163.75:8080/clientip\n" Mar 25 10:25:49.967: INFO: stdout: "10.244.1.36:41586" STEP: Verifying the preserved source ip Mar 25 10:25:49.967: INFO: Waiting up to 2m0s to get response from 10.96.163.75:8080 Mar 25 10:25:49.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-4185 exec pause-pod-8687c95844-vfxcn -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.163.75:8080/clientip' Mar 25 10:25:50.241: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.96.163.75:8080/clientip\n" Mar 25 10:25:50.241: INFO: stdout: "10.244.2.103:39678" STEP: Verifying the preserved source ip Mar 25 10:25:50.241: INFO: Deleting deployment Mar 25 10:25:50.332: INFO: Cleaning up the echo server pod Mar 25 10:25:50.502: INFO: Cleaning up the sourceip test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:25:51.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4185" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:55.307 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903 ------------------------------ {"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":54,"completed":12,"skipped":1814,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should update endpoints: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:25:51.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should update endpoints: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334 STEP: Performing setup for networking test in namespace nettest-7019 STEP: creating a selector 
STEP: Creating the service pods in kubernetes Mar 25 10:25:53.653: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:25:54.920: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:57.766: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:25:59.781: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:26:00.975: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:26:03.072: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:26:06.003: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:26:07.529: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:26:09.266: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:26:11.291: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:26:13.539: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:26:15.111: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:26:16.997: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:26:18.011: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:26:31.194: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:26:31.194: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:26:34.322: INFO: Service node-port-service in namespace nettest-7019 found. Mar 25 10:26:37.527: INFO: Service session-affinity-service in namespace nettest-7019 found. 
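The endpoint checks that follow go through agnhost's /dial helper on the test container pod; stripped down to a single shell invocation (namespace, pod name, and addresses as logged in this run), each probe looks like this:

# One dial attempt from the test container, assuming the same namespace/pod/IPs as this run.
kubectl -n nettest-7019 exec test-container-pod -- /bin/sh -c \
  "curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'"
# The response lists which backend hostnames answered; the test keeps retrying until the
# "Waiting for responses" map of still-missing hostnames drains to empty.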
STEP: Waiting for NodePort service to expose endpoint Mar 25 10:26:39.138: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:26:40.338: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) test-container-pod --> 10.96.26.143:80 (config.clusterIP) Mar 25 10:26:40.897: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:26:40.897: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:26:41.103: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 10:26:43.181: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:26:43.181: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:26:44.264: INFO: Waiting for responses: map[] Mar 25 10:26:44.264: INFO: reached 10.96.26.143 after 1/34 tries STEP: Deleting a pod which, will be replaced with a new endpoint Mar 25 10:26:44.614: INFO: Waiting for pod netserver-0 to disappear Mar 25 10:26:45.541: INFO: Pod netserver-0 no longer exists Mar 25 10:26:46.542: INFO: Waiting for amount of service:node-port-service endpoints to be 1 STEP: dialing(http) test-container-pod --> 10.96.26.143:80 (config.clusterIP) (endpoint recovery) Mar 25 10:26:52.219: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:26:52.219: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:26:52.602: INFO: Waiting for responses: map[] Mar 25 10:26:54.995: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:26:54.995: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:26:55.747: INFO: Waiting for responses: map[] Mar 25 10:26:58.475: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:26:58.475: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:26:59.632: INFO: Waiting for responses: map[] Mar 25 10:27:01.701: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:01.701: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:01.879: INFO: Waiting for responses: 
map[] Mar 25 10:27:04.024: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:04.024: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:04.876: INFO: Waiting for responses: map[] Mar 25 10:27:07.345: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:07.345: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:07.729: INFO: Waiting for responses: map[] Mar 25 10:27:09.875: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:09.875: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:10.033: INFO: Waiting for responses: map[] Mar 25 10:27:13.456: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:13.456: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:14.136: INFO: Waiting for responses: map[] Mar 25 10:27:16.614: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:16.614: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:17.858: INFO: Waiting for responses: map[] Mar 25 10:27:20.062: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:20.062: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:20.537: INFO: Waiting for responses: map[] Mar 25 10:27:22.738: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:22.738: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:23.182: INFO: Waiting for responses: map[] Mar 25 10:27:25.318: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:25.318: INFO: >>> kubeConfig: 
/root/.kube/config Mar 25 10:27:26.138: INFO: Waiting for responses: map[] Mar 25 10:27:28.619: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:28.619: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:29.824: INFO: Waiting for responses: map[] Mar 25 10:27:32.384: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:32.384: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:33.382: INFO: Waiting for responses: map[] Mar 25 10:27:35.436: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:35.436: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:35.748: INFO: Waiting for responses: map[] Mar 25 10:27:37.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:37.850: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:38.008: INFO: Waiting for responses: map[] Mar 25 10:27:41.144: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:41.145: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:42.715: INFO: Waiting for responses: map[] Mar 25 10:27:45.386: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:45.386: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:45.726: INFO: Waiting for responses: map[] Mar 25 10:27:48.174: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:48.174: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:48.486: INFO: Waiting for responses: map[] Mar 25 10:27:50.947: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false 
Quiet:false} Mar 25 10:27:50.947: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:51.096: INFO: Waiting for responses: map[] Mar 25 10:27:54.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:54.079: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:54.613: INFO: Waiting for responses: map[] Mar 25 10:27:57.306: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:27:57.306: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:27:57.647: INFO: Waiting for responses: map[] Mar 25 10:28:00.331: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:00.331: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:01.635: INFO: Waiting for responses: map[] Mar 25 10:28:04.168: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:04.168: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:05.349: INFO: Waiting for responses: map[] Mar 25 10:28:07.805: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:07.805: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:09.539: INFO: Waiting for responses: map[] Mar 25 10:28:11.576: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:11.576: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:12.746: INFO: Waiting for responses: map[] Mar 25 10:28:15.619: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:15.619: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:17.185: INFO: Waiting for responses: map[] Mar 25 10:28:20.141: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:20.141: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:21.772: INFO: Waiting for responses: map[] Mar 25 10:28:24.100: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:24.100: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:25.550: INFO: Waiting for responses: map[] Mar 25 10:28:27.971: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:27.971: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:28.059: INFO: Waiting for responses: map[] Mar 25 10:28:30.570: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:30.570: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:31.758: INFO: Waiting for responses: map[] Mar 25 10:28:33.881: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:33.881: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:36.155: INFO: Waiting for responses: map[] Mar 25 10:28:38.874: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:38.875: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:39.617: INFO: Waiting for responses: map[] Mar 25 10:28:41.913: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:9080/dial?request=hostname&protocol=http&host=10.96.26.143&port=80&tries=1'] Namespace:nettest-7019 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:28:41.913: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:28:42.614: INFO: Waiting for responses: map[] Mar 25 10:28:42.614: INFO: reached 10.96.26.143 after 33/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:28:42.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-7019" for this suite. 
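The long tail of retries above covers the window after netserver-0 was deleted, while the endpoints object and the dataplane caught up. A hypothetical way to watch the same churn by hand (not part of the framework) is:

# Hypothetical manual check: watch the endpoints object while netserver-0 is deleted.
kubectl -n nettest-7019 get endpoints node-port-service -w
# The address list drops from two entries to one once the pod is gone, which is the
# condition the "Waiting for amount of service:node-port-service endpoints to be 1" poll waits for.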
• [SLOW TEST:173.405 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update endpoints: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":54,"completed":13,"skipped":1928,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:332 [BeforeEach] Forward PTR lookup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:28:45.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns-config-map STEP: Waiting for a default service account to be provisioned in namespace [It] should forward PTR records lookup to upstream nameserver [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:332 STEP: Finding a DNS pod Mar 25 10:28:48.279: INFO: Using DNS pod: coredns-74ff55c5b-dfbbm Mar 25 10:28:49.702: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-configmap-25899cf1-c44f-456c-a30e-6ff55fc84534 dns-config-map-5761 7296bc1d-d08f-4f25-9138-d1fea8fd2e5d 1068019 0 2021-03-25 10:28:49 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:28:48 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":10101,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xzcvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xzcvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:10101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xzcvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerSta
tuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:29:01.415: INFO: Created service &Service{ObjectMeta:{e2e-dns-configmap dns-config-map-5761 58460a6a-6863-4368-820b-3411cc400383 1068109 0 2021-03-25 10:29:01 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:29:00 +0000 UTC FieldsV1 {"f:spec":{"f:ports":{".":{},"k:{\"port\":10101,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{".":{},"f:app":{}},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:10101,TargetPort:{0 10101 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: e2e-dns-configmap,},ClusterIP:10.96.207.139,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.207.139],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} Mar 25 10:29:03.176: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-879985e4-3064-4554-b653-554d23db9f4b dns-config-map-5761 5636e1c2-fbab-4ba6-a64e-62a0d6e934f4 1068125 0 2021-03-25 10:29:02 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:29:02 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-2lhtd,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-xzcvx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xzcvx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[/coredns],Args:[
-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-xzcvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:29:18.439: INFO: ExecWithOptions {Command:[dig +short -x 8.8.8.8] Namespace:dns-config-map-5761 PodName:e2e-dns-configmap-25899cf1-c44f-456c-a30e-6ff55fc84534 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:29:18.439: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:29:20.956: INFO: Running dig: [dig +short -x 8.8.8.8], stdout: "dns.google.", stderr: "", err: STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { health ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } forward . 
10.244.1.43 }] BinaryData:map[]} Mar 25 10:29:27.529: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-5761 PodName:e2e-dns-configmap-25899cf1-c44f-456c-a30e-6ff55fc84534 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:29:27.529: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:29:44.097: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: ";; connection timed out; no servers could be reached", stderr: "", err: command terminated with exit code 9 Mar 25 10:29:45.097: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-5761 PodName:e2e-dns-configmap-25899cf1-c44f-456c-a30e-6ff55fc84534 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:29:45.098: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:29:46.837: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "my.test.", stderr: "", err: STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]} Mar 25 10:29:57.597: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-5761 PodName:e2e-dns-configmap-25899cf1-c44f-456c-a30e-6ff55fc84534 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:29:57.597: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:30:14.597: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: ";; connection timed out; no servers could be reached", stderr: "", err: command terminated with exit code 9 Mar 25 10:30:15.598: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-5761 PodName:e2e-dns-configmap-25899cf1-c44f-456c-a30e-6ff55fc84534 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:30:15.598: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:30:21.551: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . 
/etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]} [AfterEach] Forward PTR lookup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:30:33.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-config-map-5761" for this suite. • [SLOW TEST:109.783 seconds] [sig-network] DNS configMap nameserver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Forward PTR lookup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:329 should forward PTR records lookup to upstream nameserver [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:332 ------------------------------ {"msg":"PASSED [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","total":54,"completed":14,"skipped":1996,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod resolv.conf /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:30:34.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod resolv.conf /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458 STEP: Preparing a test DNS service with injected DNS names... 
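The PTR forwarding checks above run dig inside the client pod; reduced to shell (namespace, pod, and container names from this run):

# Reverse lookup through the cluster DNS, as exercised in the Forward PTR lookup test above.
kubectl -n dns-config-map-5761 exec e2e-dns-configmap-25899cf1-c44f-456c-a30e-6ff55fc84534 \
  -c agnhost-container -- dig +short -x 8.8.8.8
# With the stock Corefile this resolves via the upstream resolver (the log shows "dns.google.");
# after the coredns ConfigMap is pointed at the test nameserver, the same lookup for
# 192.0.2.123 returns the injected record "my.test.".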
Mar 25 10:30:36.851: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-6d4e94f6-1c87-465d-8264-f6c2d2b2025b dns-8585 d0f053a0-62a8-4021-8beb-2f4b64ad98a1 1068754 0 2021-03-25 10:30:36 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:30:36 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-5dw57,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-p92tf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p92tf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-p92tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:30:49.734: INFO: testServerIP is 10.244.1.52 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
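The pod created next (full dump below) uses dnsPolicy None with a custom dnsConfig, so its resolv.conf is built entirely from those fields; the verification the test performs later reduces to the following (namespace, pod, and values are the ones from this run):

# Inspect the generated resolv.conf in the test pod.
kubectl -n dns-8585 exec e2e-dns-utils -c agnhost-container -- cat /etc/resolv.conf
# Expected contents, derived from the dnsConfig in the pod spec below:
#   nameserver 10.244.1.52
#   search resolv.conf.local
#   options ndots:2
# A follow-up query exercises the custom nameserver and search path:
kubectl -n dns-8585 exec e2e-dns-utils -c agnhost-container -- dig +short +search notexistname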
Mar 25 10:30:49.757: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils dns-8585 3a48e2fe-c805-432a-b63b-47287ce67192 1068865 0 2021-03-25 10:30:49 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:30:49 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p92tf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p92tf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p92tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.1.52],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstrai
nt{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS option is configured on pod... Mar 25 10:30:58.286: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-8585 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:30:58.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized name server and search path are working... Mar 25 10:30:59.297: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-8585 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:30:59.297: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:30:59.595: INFO: Deleting pod e2e-dns-utils... Mar 25 10:31:01.108: INFO: Deleting pod e2e-configmap-dns-server-6d4e94f6-1c87-465d-8264-f6c2d2b2025b... Mar 25 10:31:03.992: INFO: Deleting configmap e2e-coredns-configmap-5dw57... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:31:05.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8585" for this suite. • [SLOW TEST:31.818 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod resolv.conf /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":54,"completed":15,"skipped":2101,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSS ------------------------------ [sig-network] Services should check NodePort out-of-range /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:31:06.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should check NodePort out-of-range /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494 STEP: creating service nodeport-range-test with type NodePort in namespace services-7517 STEP: changing service nodeport-range-test to out-of-range NodePort 55468 STEP: deleting original service nodeport-range-test STEP: creating service nodeport-range-test with out-of-range NodePort 55468 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:31:14.799: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "services-7517" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:8.606 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should check NodePort out-of-range /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494 ------------------------------ {"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":54,"completed":16,"skipped":2110,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should update endpoints: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:31:15.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should update endpoints: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351 STEP: Performing setup for networking test in namespace nettest-1254 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:31:16.277: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:31:18.193: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:21.189: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:22.243: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:25.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:26.998: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:28.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:30.788: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:33.052: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:31:34.592: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:31:37.985: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:31:38.715: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:31:40.425: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:31:42.218: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:31:45.118: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:31:46.396: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 
10:31:48.350: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:31:49.867: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:32:08.430: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:32:08.430: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:32:11.552: INFO: Service node-port-service in namespace nettest-1254 found. Mar 25 10:32:15.112: INFO: Service session-affinity-service in namespace nettest-1254 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:32:16.865: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:32:18.331: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(udp) test-container-pod --> 10.96.247.94:90 (config.clusterIP) Mar 25 10:32:18.418: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:18.418: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:19.683: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 10:32:21.997: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:21.997: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:23.383: INFO: Waiting for responses: map[] Mar 25 10:32:23.383: INFO: reached 10.96.247.94 after 1/34 tries STEP: Deleting a pod which, will be replaced with a new endpoint Mar 25 10:32:26.753: INFO: Waiting for pod netserver-0 to disappear Mar 25 10:32:27.021: INFO: Pod netserver-0 no longer exists Mar 25 10:32:28.022: INFO: Waiting for amount of service:node-port-service endpoints to be 1 STEP: dialing(udp) test-container-pod --> 10.96.247.94:90 (config.clusterIP) (endpoint recovery) Mar 25 10:32:33.690: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:33.691: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:34.423: INFO: Waiting for responses: map[] Mar 25 10:32:36.565: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:36.565: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:37.777: INFO: Waiting for responses: map[] Mar 25 10:32:39.853: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:39.853: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:40.391: INFO: Waiting for responses: map[] Mar 25 10:32:42.758: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:42.758: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:43.715: INFO: Waiting for responses: map[] Mar 25 10:32:46.002: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:46.002: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:46.522: INFO: Waiting for responses: map[] Mar 25 10:32:48.781: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:48.781: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:49.530: INFO: Waiting for responses: map[] Mar 25 10:32:51.906: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:51.906: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:52.778: INFO: Waiting for responses: map[] Mar 25 10:32:55.813: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:32:55.814: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:32:57.016: INFO: Waiting for responses: map[] Mar 25 10:33:00.082: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:00.082: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:01.311: INFO: Waiting for responses: map[] Mar 25 10:33:03.827: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:03.827: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:04.266: INFO: Waiting for responses: map[] Mar 25 10:33:06.578: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 
PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:06.578: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:07.359: INFO: Waiting for responses: map[] Mar 25 10:33:09.465: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:09.465: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:10.179: INFO: Waiting for responses: map[] Mar 25 10:33:12.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:12.728: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:14.561: INFO: Waiting for responses: map[] Mar 25 10:33:17.709: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:17.709: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:19.703: INFO: Waiting for responses: map[] Mar 25 10:33:21.836: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:21.836: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:22.427: INFO: Waiting for responses: map[] Mar 25 10:33:24.832: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:24.832: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:26.320: INFO: Waiting for responses: map[] Mar 25 10:33:29.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:29.117: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:30.577: INFO: Waiting for responses: map[] Mar 25 10:33:32.752: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:32.752: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:33.423: INFO: Waiting for responses: map[] Mar 25 10:33:36.181: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:36.181: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:37.285: INFO: Waiting for responses: map[] Mar 25 10:33:40.263: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:40.263: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:41.370: INFO: Waiting for responses: map[] Mar 25 10:33:43.457: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:43.457: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:44.054: INFO: Waiting for responses: map[] Mar 25 10:33:46.672: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:46.672: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:47.995: INFO: Waiting for responses: map[] Mar 25 10:33:50.111: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:50.111: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:50.333: INFO: Waiting for responses: map[] Mar 25 10:33:52.359: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:52.359: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:52.541: INFO: Waiting for responses: map[] Mar 25 10:33:55.505: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:55.505: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:56.093: INFO: Waiting for responses: map[] Mar 25 10:33:58.740: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:33:58.740: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:33:59.527: INFO: Waiting for responses: map[] Mar 25 10:34:01.741: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:34:01.741: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:34:02.615: INFO: Waiting for responses: map[] Mar 25 10:34:04.620: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:34:04.620: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:34:05.644: INFO: Waiting for responses: map[] Mar 25 10:34:07.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:34:07.717: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:34:08.384: INFO: Waiting for responses: map[] Mar 25 10:34:10.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:34:10.508: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:34:11.071: INFO: Waiting for responses: map[] Mar 25 10:34:13.141: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:34:13.141: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:34:14.544: INFO: Waiting for responses: map[] Mar 25 10:34:16.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:34:16.989: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:34:17.224: INFO: Waiting for responses: map[] Mar 25 10:34:20.113: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:34:20.113: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:34:20.981: INFO: Waiting for responses: map[] Mar 25 10:34:23.510: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.58:9080/dial?request=hostname&protocol=udp&host=10.96.247.94&port=90&tries=1'] Namespace:nettest-1254 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:34:23.511: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:34:24.759: INFO: Waiting for 
responses: map[] Mar 25 10:34:24.759: INFO: reached 10.96.247.94 after 33/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:34:24.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-1254" for this suite. • [SLOW TEST:192.559 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update endpoints: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: udp","total":54,"completed":17,"skipped":2124,"failed":1,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:34:27.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177 STEP: creating service externalip-test with type=clusterIP in namespace services-3274 STEP: creating replication controller externalip-test in namespace services-3274 I0325 10:34:32.671760 7 runners.go:190] Created replication controller with name: externalip-test, namespace: services-3274, replica count: 2 I0325 10:34:35.722387 7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:34:38.722792 7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:34:41.722951 7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:34:44.724138 7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:34:47.724387 7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:34:50.724959 7 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 
0 unknown, 0 runningButNotReady
Mar 25 10:34:50.725: INFO: Creating new exec pod
E0325 10:35:03.385573 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:35:04.804633 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:35:07.389890 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:35:12.468030 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:35:19.511064 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:35:36.601357 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:36:23.728475 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:37:03.193947 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 10:37:03.383: FAIL: Unexpected error:
    <*errors.errorString | 0xc0040945f0>: {
        s: "no subset of available IP address found for the endpoint externalip-test within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint externalip-test within timeout 2m0s
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.12()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201 +0x30f
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00294cf00)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00294cf00)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00294cf00, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-3274".
STEP: Found 14 events.
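The failure above, together with the repeated "Failed to watch *v1.EndpointSlice" errors from the service jig's reflector, means the framework never observed a ready address for the externalip-test endpoints within the 2m0s timeout; the events and node dumps that follow are its standard post-failure diagnostics. A hand-run check along these lines (illustrative shell only, not part of the recorded run, and it assumes the services-3274 namespace still exists when executed) would show whether the Endpoints object ever received addresses and which discovery.k8s.io versions this v1.21.0-alpha.0 apiserver actually serves:

  # Did the Endpoints object for the service ever get ready addresses?
  kubectl --kubeconfig=/root/.kube/config -n services-3274 get endpoints externalip-test -o yaml
  # Are there matching EndpointSlices, and which discovery.k8s.io API versions exist?
  kubectl --kubeconfig=/root/.kube/config -n services-3274 get endpointslices -l kubernetes.io/service-name=externalip-test
  kubectl --kubeconfig=/root/.kube/config api-versions | grep discovery.k8s.io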
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:33 +0000 UTC - event for externalip-test: {replication-controller } SuccessfulCreate: Created pod: externalip-test-ggh99
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:34 +0000 UTC - event for externalip-test: {replication-controller } SuccessfulCreate: Created pod: externalip-test-bxvm7
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:34 +0000 UTC - event for externalip-test-ggh99: {default-scheduler } Scheduled: Successfully assigned services-3274/externalip-test-ggh99 to latest-worker2
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:35 +0000 UTC - event for externalip-test-bxvm7: {default-scheduler } Scheduled: Successfully assigned services-3274/externalip-test-bxvm7 to latest-worker2
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:38 +0000 UTC - event for externalip-test-ggh99: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:40 +0000 UTC - event for externalip-test-bxvm7: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:44 +0000 UTC - event for externalip-test-ggh99: {kubelet latest-worker2} Created: Created container externalip-test
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:46 +0000 UTC - event for externalip-test-bxvm7: {kubelet latest-worker2} Created: Created container externalip-test
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:46 +0000 UTC - event for externalip-test-ggh99: {kubelet latest-worker2} Started: Started container externalip-test
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:47 +0000 UTC - event for externalip-test-bxvm7: {kubelet latest-worker2} Started: Started container externalip-test
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:51 +0000 UTC - event for execpodzxwgb: {default-scheduler } Scheduled: Successfully assigned services-3274/execpodzxwgb to latest-worker
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:54 +0000 UTC - event for execpodzxwgb: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:34:59 +0000 UTC - event for execpodzxwgb: {kubelet latest-worker} Created: Created container agnhost-container
Mar 25 10:37:03.533: INFO: At 2021-03-25 10:35:00 +0000 UTC - event for execpodzxwgb: {kubelet latest-worker} Started: Started container agnhost-container
Mar 25 10:37:03.903: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 25 10:37:03.903: INFO: execpodzxwgb latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:35:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:35:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:50 +0000 UTC }]
Mar 25 10:37:03.903: INFO: externalip-test-bxvm7 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:34 +0000 UTC }]
Mar 25 10:37:03.903: INFO: externalip-test-ggh99 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC
2021-03-25 10:34:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:34:33 +0000 UTC }] Mar 25 10:37:03.903: INFO: Mar 25 10:37:04.441: INFO: Logging node info for node latest-control-plane Mar 25 10:37:04.936: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1070121 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:33:42 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:33:42 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:33:42 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:33:42 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:37:04.937: INFO: Logging kubelet events for node latest-control-plane Mar 25 10:37:05.299: INFO: Logging pods the kubelet thinks is on node 
latest-control-plane Mar 25 10:37:05.862: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:05.862: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 10:37:05.862: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:05.862: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 10:37:05.862: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:05.862: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 25 10:37:05.862: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:05.862: INFO: Container etcd ready: true, restart count 0 Mar 25 10:37:05.862: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:05.862: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 10:37:05.862: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:05.862: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:37:05.862: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:05.862: INFO: Container kube-proxy ready: true, restart count 0 W0325 10:37:06.750910 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:37:08.102: INFO: Latency metrics for node latest-control-plane Mar 25 10:37:08.102: INFO: Logging node info for node latest-worker Mar 25 10:37:08.778: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1071688 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:23:27 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:37:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:37:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:37:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:37:02 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:37:08.778: INFO: Logging kubelet events for node latest-worker Mar 25 10:37:09.506: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 10:37:09.900: INFO: webserver-deployment-795d758f88-vldpk started at 2021-03-25 10:36:56 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-795d758f88-qrzf2 started at 2021-03-25 10:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:09.900: INFO: ss-1 started at 2021-03-25 10:35:58 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container webserver ready: false, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-847dcfb7fb-mqgls started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: webserver-deployment-847dcfb7fb-8cpx8 started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: webserver-deployment-847dcfb7fb-78nm7 started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: webserver-deployment-847dcfb7fb-hzw8x started at 2021-03-25 10:36:26 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container httpd ready: true, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-847dcfb7fb-s8fzx started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: kube-proxy-kjrrj started at 2021-03-22 
08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-795d758f88-72fqd started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: ss-0 started at 2021-03-25 10:36:16 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container webserver ready: true, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-795d758f88-b8wp5 started at 2021-03-25 10:36:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-795d758f88-kh95j started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: kindnet-485hg started at 2021-03-25 10:20:57 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-795d758f88-qmw89 started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: webserver-deployment-795d758f88-qvwwb started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: webserver-deployment-847dcfb7fb-f89mr started at 2021-03-25 10:37:07 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-847dcfb7fb-5xsmf started at 2021-03-25 10:36:26 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container httpd ready: true, restart count 0 Mar 25 10:37:09.900: INFO: execpodzxwgb started at 2021-03-25 10:34:51 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-847dcfb7fb-8ntmc started at 2021-03-25 10:36:26 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container httpd ready: true, restart count 0 Mar 25 10:37:09.900: INFO: webserver-deployment-795d758f88-nn65d started at (0+0 container statuses recorded) Mar 25 10:37:09.900: INFO: coredns-74ff55c5b-pgfgz started at 2021-03-25 10:30:32 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:09.900: INFO: Container coredns ready: true, restart count 0 W0325 10:37:10.246231 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 10:37:13.191: INFO: Latency metrics for node latest-worker Mar 25 10:37:13.192: INFO: Logging node info for node latest-worker2 Mar 25 10:37:15.054: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1071841 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:33:09 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:37:12 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:37:12 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:37:12 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:37:12 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 10:37:15.055: INFO: Logging kubelet events for node latest-worker2 Mar 25 10:37:16.109: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 10:37:16.451: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.451: INFO: Container volume-tester ready: false, restart count 0 Mar 25 10:37:16.452: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 10:37:16.452: INFO: externalip-test-ggh99 started at 2021-03-25 10:34:34 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container externalip-test ready: true, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-rwjjd started at 2021-03-25 10:36:26 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: true, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-wdhxq started at 2021-03-25 10:37:07 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-gxtbv started at 2021-03-25 10:36:27 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: true, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-xcptq started at 2021-03-25 10:37:08 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-rkgk4 started at 2021-03-25 10:37:08 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-r6dwn started at (0+0 container statuses recorded) Mar 25 10:37:16.452: INFO: coredns-74ff55c5b-q9zdq started at 2021-03-25 10:30:26 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container coredns ready: true, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-g7b7g started at 2021-03-25 10:36:26 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: true, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-795d758f88-jrggm started at 2021-03-25 10:36:54 
+0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:16.452: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-hcp85 started at 2021-03-25 10:37:08 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-2htzf started at 2021-03-25 10:37:08 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-94bq7 started at 2021-03-25 10:37:07 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-795d758f88-lnjrs started at 2021-03-25 10:36:54 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: false, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-795d758f88-q29h6 started at (0+0 container statuses recorded) Mar 25 10:37:16.452: INFO: webserver-deployment-795d758f88-6vs64 started at (0+0 container statuses recorded) Mar 25 10:37:16.452: INFO: webserver-deployment-795d758f88-bpzkp started at (0+0 container statuses recorded) Mar 25 10:37:16.452: INFO: externalip-test-bxvm7 started at 2021-03-25 10:34:35 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container externalip-test ready: true, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-zp6cw started at 2021-03-25 10:36:26 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: true, restart count 0 Mar 25 10:37:16.452: INFO: webserver-deployment-847dcfb7fb-bb5xq started at 2021-03-25 10:36:27 +0000 UTC (0+1 container statuses recorded) Mar 25 10:37:16.452: INFO: Container httpd ready: true, restart count 0 W0325 10:37:17.819245 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 10:37:18.927: INFO: Latency metrics for node latest-worker2 Mar 25 10:37:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3274" for this suite. 
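The node dump and kubelet pod listing above are the framework's automatic diagnostics for the externalip-test failure reported just below: the service never gained a ready endpoint address within the 2m0s timeout, so the suite captured the state of latest-worker2 before tearing the namespace down. A minimal sketch of how the same checks could be rerun by hand, using only the kubeconfig path, node, namespace and service names that appear in this log, and assuming it runs before the services-3274 namespace teardown that closes the spec:

# Node conditions, capacity and image list, roughly what is dumped above
kubectl --kubeconfig=/root/.kube/config describe node latest-worker2

# Pods the kubelet is running on that node (the "Logging pods" listing above)
kubectl --kubeconfig=/root/.kube/config get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=latest-worker2

# The failure complains that no ready endpoint subset appeared, so inspect the
# Endpoints object and the service behind externalip-test
kubectl --kubeconfig=/root/.kube/config -n services-3274 get endpoints externalip-test -o yaml
kubectl --kubeconfig=/root/.kube/config -n services-3274 describe service externalip-test

Since both externalip-test pods listed above report Ready: true, the Endpoints object, rather than the pods themselves, would be the natural first thing to inspect.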
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [173.256 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177 Mar 25 10:37:03.383: Unexpected error: <*errors.errorString | 0xc0040945f0>: { s: "no subset of available IP address found for the endpoint externalip-test within timeout 2m0s", } no subset of available IP address found for the endpoint externalip-test within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201 ------------------------------ {"msg":"FAILED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":54,"completed":17,"skipped":2241,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:37:21.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for client IP based session affinity: udp [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434 STEP: Performing setup for networking test in namespace nettest-7853 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:37:27.144: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:37:32.749: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:35.523: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:36.989: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:39.754: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:42.180: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:43.648: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:45.551: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:47.005: INFO: 
The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:49.102: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:37:51.383: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:37:52.790: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:37:55.759: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:37:57.432: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:37:59.145: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:38:00.986: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:38:03.348: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:38:05.305: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:38:07.362: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:38:09.287: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:38:23.510: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:38:23.510: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:38:26.439: INFO: Service node-port-service in namespace nettest-7853 found. Mar 25 10:38:29.551: INFO: Service session-affinity-service in namespace nettest-7853 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:38:31.227: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:38:32.300: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(udp) test-container-pod --> 10.96.6.122:90 Mar 25 10:38:32.964: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:32.965: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:33.996: INFO: Tries: 10, in try: 0, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:36.136: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:36.136: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:36.439: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready 
True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:38.988: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:38.988: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:39.363: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:41.376: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:41.376: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:41.781: INFO: Tries: 10, in try: 3, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:43.857: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:43.857: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:44.771: INFO: Tries: 10, in try: 4, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:46.826: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:46.826: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:46.979: INFO: Tries: 10, in try: 5, stdout: {"responses":["netserver-1"]}, stderr: , 
command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:49.002: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:49.002: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:49.390: INFO: Tries: 10, in try: 6, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:51.920: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:51.920: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:52.566: INFO: Tries: 10, in try: 7, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:54.608: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:54.608: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:54.792: INFO: Tries: 10, in try: 8, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } Mar 25 10:38:56.832: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.164:9080/dial?request=hostName&protocol=udp&host=10.96.6.122&port=90&tries=1'] Namespace:nettest-7853 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:38:56.832: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:38:58.062: INFO: Tries: 10, in try: 9, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7853, hostIp: 172.18.0.17, podIp: 10.244.2.164, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:38:11 +0000 UTC }]" } [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:39:00.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-7853" for this suite. • [SLOW TEST:99.595 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for client IP based session affinity: udp [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]","total":54,"completed":18,"skipped":2280,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196 [BeforeEach] [sig-network] NetworkPolicy API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:39:00.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename networkpolicies STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating NetworkPolicy API operations /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 25 10:39:01.476: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Mar 25 10:39:01.557: INFO: starting watch STEP: patching STEP: updating Mar 25 10:39:01.883: INFO: waiting for watch events with expected annotations Mar 25 10:39:01.883: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"} Mar 25 10:39:01.883: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] NetworkPolicy API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:39:02.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "networkpolicies-3488" for this suite. •{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":54,"completed":19,"skipped":2395,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Firewall rule should have correct firewall rules for e2e cluster /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204 [BeforeEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:39:03.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename firewall-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61 Mar 25 10:39:04.363: INFO: Only supported for providers [gce] (not local) [AfterEach] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:39:04.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "firewall-test-6242" for this suite. 
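The NetworkPolicy API spec above only exercises API verbs against networking.k8s.io/v1 (the "networking.k8s.iov1" step text is just the group and version concatenated): discovery, create, get, list, watch, cluster-wide list/watch, patch, update, delete and delete-collection, with no traffic ever sent. A rough kubectl equivalent of those steps, with an illustrative policy name (not the one the test uses) and assuming the networkpolicies-3488 namespace were still present:

# Discovery: confirm the networking.k8s.io/v1 group/version is served
kubectl api-versions | grep networking.k8s.io

# Create a minimal policy (name and spec are illustrative only)
kubectl -n networkpolicies-3488 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-deny-all
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
EOF

# Get, list (namespaced and cluster-wide), and watch
kubectl -n networkpolicies-3488 get networkpolicy example-deny-all -o yaml
kubectl get networkpolicies --all-namespaces
kubectl -n networkpolicies-3488 get networkpolicy example-deny-all --watch   # interrupt with Ctrl-C

# Roughly what the patch/update steps above wait for: a "patched" annotation
# showing up in the watch events
kubectl -n networkpolicies-3488 annotate networkpolicy example-deny-all patched=true

# Delete a single object, then a whole collection
kubectl -n networkpolicies-3488 delete networkpolicy example-deny-all
kubectl -n networkpolicies-3488 delete networkpolicies --all

The Firewall rule spec that follows never gets past its BeforeEach: as the SKIPPING block below notes, those checks are only supported for the gce provider, while this run uses the local provider.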
S [SKIPPING] in Spec Setup (BeforeEach) [1.089 seconds] [sig-network] Firewall rule /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have correct firewall rules for e2e cluster [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204 Only supported for providers [gce] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should implement service.kubernetes.io/headless /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:39:04.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should implement service.kubernetes.io/headless /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916 STEP: creating service-headless in namespace services-2688 STEP: creating service service-headless in namespace services-2688 STEP: creating replication controller service-headless in namespace services-2688 I0325 10:39:04.968611 7 runners.go:190] Created replication controller with name: service-headless, namespace: services-2688, replica count: 3 I0325 10:39:08.018965 7 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:39:11.019183 7 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:39:14.020761 7 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:39:17.021187 7 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:39:20.021386 7 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: creating service in namespace services-2688 STEP: creating service service-headless-toggled in namespace services-2688 STEP: creating replication controller service-headless-toggled in namespace services-2688 I0325 10:39:21.412101 7 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-2688, replica count: 3 I0325 10:39:24.463310 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:39:27.463498 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0325 10:39:30.463719 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:39:33.463873 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:39:36.464067 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0325 10:39:39.465109 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service is up Mar 25 10:39:39.911: INFO: Creating new host exec pod Mar 25 10:39:41.568: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:39:43.682: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:39:45.661: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:39:48.804: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:39:50.170: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:39:51.913: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 25 10:39:51.913: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 25 10:39:58.456: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.220.21:80 2>&1 || true; echo; done" in pod services-2688/verify-service-up-host-exec-pod Mar 25 10:39:58.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.220.21:80 2>&1 || true; echo; done' Mar 25 10:40:13.881: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O 
- http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n" Mar 25 10:40:13.881: INFO: stdout: 
"service-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nse
rvice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\n" Mar 25 10:40:13.882: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.220.21:80 2>&1 || true; echo; done" in pod services-2688/verify-service-up-exec-pod-hpzsz Mar 25 10:40:13.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-up-exec-pod-hpzsz -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.220.21:80 2>&1 || true; echo; done' Mar 25 10:40:15.373: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 
1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n" Mar 25 10:40:15.374: INFO: stdout: 
"service-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nse
rvice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2688 STEP: Deleting pod verify-service-up-exec-pod-hpzsz in namespace services-2688 STEP: verifying service-headless is not up Mar 25 10:40:17.703: INFO: Creating new host exec pod Mar 25 10:40:20.577: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:23.197: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:24.796: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:26.614: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:29.612: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 25 10:40:29.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.41.168:80 && echo service-down-failed' Mar 25 10:40:33.390: INFO: rc: 28 Mar 25 10:40:33.390: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.41.168:80 && echo service-down-failed" in pod services-2688/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.41.168:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.41.168:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2688 STEP: adding service.kubernetes.io/headless label STEP: verifying service is not up Mar 25 10:40:36.613: INFO: Creating new host exec pod Mar 25 10:40:36.891: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:39.269: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, 
waiting for it to be Running (with Ready = true) Mar 25 10:40:41.198: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:43.383: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:45.254: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 25 10:40:45.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.220.21:80 && echo service-down-failed' Mar 25 10:40:48.762: INFO: rc: 28 Mar 25 10:40:48.763: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.220.21:80 && echo service-down-failed" in pod services-2688/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.220.21:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.220.21:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2688 STEP: removing service.kubernetes.io/headless annotation STEP: verifying service is up Mar 25 10:40:51.342: INFO: Creating new host exec pod Mar 25 10:40:51.710: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:53.730: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:40:55.858: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 25 10:40:55.858: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 25 10:41:04.316: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.220.21:80 2>&1 || true; echo; done" in pod services-2688/verify-service-up-host-exec-pod Mar 25 10:41:04.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.220.21:80 2>&1 || true; echo; done' Mar 25 10:41:04.904: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n" Mar 25 10:41:04.904: INFO: stdout: 
"service-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nse
rvice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\n" Mar 25 10:41:04.904: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.220.21:80 2>&1 || true; echo; done" in pod services-2688/verify-service-up-exec-pod-t5q7k Mar 25 10:41:04.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-up-exec-pod-t5q7k -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.220.21:80 2>&1 || true; echo; done' Mar 25 10:41:05.368: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 
1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.220.21:80\n+ echo\n" Mar 25 10:41:05.368: INFO: stdout: 
"service-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nse
rvice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-fsphw\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-j4ln8\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-fsphw\nservice-headless-toggled-nxxqp\nservice-headless-toggled-j4ln8\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2688 STEP: Deleting pod verify-service-up-exec-pod-t5q7k in namespace services-2688 STEP: verifying service-headless is still not up Mar 25 10:41:07.726: INFO: Creating new host exec pod Mar 25 10:41:08.237: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:10.787: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:12.926: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:14.319: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:16.563: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:18.441: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 25 10:41:18.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.41.168:80 && echo service-down-failed' Mar 25 10:41:22.982: INFO: rc: 28 Mar 25 10:41:22.983: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.41.168:80 && echo service-down-failed" in pod services-2688/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-2688 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.41.168:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.41.168:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-2688 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:41:24.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "services-2688" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:140.616 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should implement service.kubernetes.io/headless /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916 ------------------------------ {"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":54,"completed":20,"skipped":2632,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for node-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:41:25.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for node-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198 STEP: Performing setup for networking test in namespace nettest-5146 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:41:26.228: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:41:27.223: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:29.402: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:31.953: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:33.354: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:35.408: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:41:37.954: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:41:39.680: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:41:41.270: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:41:43.245: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:41:45.255: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:41:47.493: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:41:49.792: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:41:51.288: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:41:53.275: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:41:53.382: INFO: The status of Pod 
netserver-1 is Running (Ready = false) Mar 25 10:41:55.404: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:42:01.976: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:42:01.976: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:42:03.543: INFO: Service node-port-service in namespace nettest-5146 found. Mar 25 10:42:05.447: INFO: Service session-affinity-service in namespace nettest-5146 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:42:06.986: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:42:08.353: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) 172.18.0.17 (node) --> 10.96.56.46:80 (config.clusterIP) Mar 25 10:42:08.917: INFO: Going to poll 10.96.56.46 on port 80 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:42:09.139: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.56.46:80/hostName | grep -v '^\s*$'] Namespace:nettest-5146 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:42:09.139: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:42:09.267: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:42:11.427: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.56.46:80/hostName | grep -v '^\s*$'] Namespace:nettest-5146 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:42:11.427: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:42:11.545: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] STEP: dialing(http) 172.18.0.17 (node) --> 172.18.0.17:30289 (nodeIP) Mar 25 10:42:11.545: INFO: Going to poll 172.18.0.17 on port 30289 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:42:11.612: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30289/hostName | grep -v '^\s*$'] Namespace:nettest-5146 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:42:11.612: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:42:11.902: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:42:13.922: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30289/hostName | grep -v '^\s*$'] Namespace:nettest-5146 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:42:13.922: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:42:14.260: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:42:16.732: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30289/hostName | grep -v '^\s*$'] Namespace:nettest-5146 
PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:42:16.732: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:42:16.984: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:42:19.043: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30289/hostName | grep -v '^\s*$'] Namespace:nettest-5146 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:42:19.043: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:42:19.319: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:42:21.450: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30289/hostName | grep -v '^\s*$'] Namespace:nettest-5146 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:42:21.450: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:42:21.620: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:42:23.715: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:30289/hostName | grep -v '^\s*$'] Namespace:nettest-5146 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:42:23.715: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:42:23.875: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:42:23.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-5146" for this suite. 
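For readers following the node-Service check above: the framework execs curl from host-test-container-pod against the ClusterIP and then the nodeIP:nodePort pair, polling the /hostName endpoint until every netserver backend has answered at least once (up to the 34 tries logged). Below is a minimal shell sketch of that polling idea, not the suite's own Go code; the namespace, pod name, and address are simply the values from this run and would differ on another cluster.

    NS=nettest-5146
    POD=host-test-container-pod
    URL='http://10.96.56.46:80/hostName'   # ClusterIP target; the nodeIP:nodePort pair (172.18.0.17:30289) is polled the same way
    : > /tmp/hostnames.txt
    for try in $(seq 1 34); do             # 34 matches the MaxTries logged above
      kubectl --namespace "$NS" exec "$POD" -- \
        /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 $URL; echo" >> /tmp/hostnames.txt
      # stop once both backends have responded at least once
      if grep -q netserver-0 /tmp/hostnames.txt && grep -q netserver-1 /tmp/hostnames.txt; then
        echo "found all expected endpoints"; break
      fi
      sleep 2
    done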
• [SLOW TEST:58.855 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for node-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: http","total":54,"completed":21,"skipped":2635,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] ESIPP [Slow] should handle updates to ExternalTrafficPolicy field /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095 [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:42:24.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename esipp STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858 Mar 25 10:42:24.322: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:42:24.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "esipp-182" for this suite. 
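The skip logged just above ("Only supported for providers [gce gke] (not local)") comes from provider gating: the ESIPP/LoadBalancer specs inspect the configured cloud provider in their BeforeEach and bail out on a local kind cluster. To reproduce this run's spec selection, an invocation along the following lines should behave the same way; the binary path and kubeconfig location are placeholders, while the flags are the standard e2e.test/ginkgo ones.

    # run only the sig-network specs against a local (kind) cluster; provider-gated
    # specs such as the ESIPP ones above are then reported as SKIPPING
    ./e2e.test \
      -ginkgo.focus='\[sig-network\]' \
      --provider=local \
      --kubeconfig="$HOME/.kube/config"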
[AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866 S [SKIPPING] in Spec Setup (BeforeEach) [0.372 seconds] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should handle updates to ExternalTrafficPolicy field [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:42:24.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for multiple endpoint-Services with same selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289 STEP: Performing setup for networking test in namespace nettest-9857 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:42:24.741: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:42:25.311: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:42:27.893: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:42:29.336: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:42:31.380: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:33.317: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:35.537: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:37.829: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:39.493: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:41.408: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:43.949: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:45.343: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:47.768: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:49.887: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:42:51.799: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:42:52.715: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:43:02.820: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 
10:43:02.820: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:43:09.546: INFO: Service node-port-service in namespace nettest-9857 found. Mar 25 10:43:12.050: INFO: Service session-affinity-service in namespace nettest-9857 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:43:13.105: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:43:14.110: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: creating a second service with same selector Mar 25 10:43:14.534: INFO: Service second-node-port-service in namespace nettest-9857 found. Mar 25 10:43:16.134: INFO: Waiting for amount of service:second-node-port-service endpoints to be 2 STEP: dialing(http) netserver-0 (endpoint) --> 10.96.187.169:80 (config.clusterIP) Mar 25 10:43:17.116: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.187.169&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:17.116: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:17.819: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:19.900: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.187.169&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:19.900: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:20.112: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:22.613: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.187.169&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:22.613: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:23.170: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:25.695: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.187.169&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:25.696: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:25.868: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:27.991: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.187.169&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:27.991: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:28.468: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:30.564: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.187.169&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:30.564: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:30.832: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:33.140: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.187.169&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:33.140: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:33.476: INFO: Waiting for responses: map[] Mar 25 10:43:33.476: INFO: reached 10.96.187.169 after 6/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.17:32570 (nodeIP) Mar 25 10:43:33.894: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32570&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:33.894: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:34.278: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 10:43:36.589: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32570&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:36.589: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:36.956: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 10:43:39.367: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32570&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:39.367: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:39.949: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 10:43:42.073: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32570&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:42.073: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:42.413: INFO: Waiting for responses: map[] Mar 25 10:43:42.413: INFO: reached 172.18.0.17 after 3/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 10.96.86.153:80 (svc2.clusterIP) Mar 25 10:43:42.557: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.86.153&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:42.557: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:42.816: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 10:43:45.487: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.86.153&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:45.487: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:45.660: INFO: Waiting for responses: map[netserver-0:{}] Mar 25 10:43:47.773: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.86.153&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:47.773: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:48.011: INFO: Waiting for responses: map[] Mar 25 10:43:48.011: INFO: reached 10.96.86.153 after 2/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.17:32073 (nodeIP) Mar 25 10:43:48.357: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32073&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:48.357: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:48.635: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:50.806: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32073&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:50.806: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:51.476: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:53.650: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32073&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:53.650: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:53.922: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:43:56.038: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32073&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:43:56.038: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:43:56.270: INFO: Waiting for responses: map[] Mar 25 10:43:56.271: INFO: reached 172.18.0.17 after 3/34 tries STEP: deleting the original node port service STEP: dialing(http) netserver-0 (endpoint) --> 10.96.86.153:80 (svc2.clusterIP) Mar 25 10:44:13.044: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.86.153&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:44:13.044: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:44:13.546: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:44:16.505: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.86.153&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:44:16.505: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:44:17.028: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:44:19.471: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=10.96.86.153&port=80&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:44:19.471: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:44:20.179: INFO: Waiting for responses: map[] Mar 25 10:44:20.179: INFO: reached 10.96.86.153 after 2/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.17:32073 (nodeIP) Mar 25 10:44:21.136: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32073&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:44:21.136: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:44:22.115: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:44:24.125: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.193:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32073&tries=1'] Namespace:nettest-9857 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:44:24.125: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:44:24.926: INFO: Waiting for responses: map[] Mar 25 10:44:24.926: INFO: reached 172.18.0.17 after 1/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:44:24.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-9857" for this suite. 
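The spec that just finished exercises Service/Endpoints decoupling: two Services sharing one selector each get their own ClusterIP and NodePort but resolve to the same backend pods, and deleting one must not disturb the other, which is why the log re-probes svc2.clusterIP and the second NodePort after "deleting the original node port service". A rough kubectl sketch of the same scenario, with illustrative names rather than the ones the suite generates:

    kubectl create deployment netserver --image=k8s.gcr.io/e2e-test-images/agnhost:2.28 --replicas=2 -- /agnhost netexec
    kubectl expose deployment netserver --name=svc-one --port=80 --target-port=8080 --type=NodePort
    kubectl expose deployment netserver --name=svc-two --port=80 --target-port=8080 --type=NodePort
    kubectl get endpoints svc-one svc-two     # both list the same two pod IPs
    kubectl delete service svc-one
    kubectl get endpoints svc-two             # still populated; svc-two stays reachable on its ClusterIP and NodePort

Each Service receives its own NodePort allocation, which matches the two distinct node ports (32570 and 32073) probed in the log above.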
• [SLOW TEST:121.166 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for multiple endpoint-Services with same selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector","total":54,"completed":22,"skipped":2761,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53 [BeforeEach] [sig-network] KubeProxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:44:25.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kube-proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should set TCP CLOSE_WAIT timeout [Privileged] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53 Mar 25 10:44:27.397: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:29.587: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:31.447: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:33.404: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:35.625: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:38.119: INFO: The status of Pod e2e-net-exec is Running (Ready = true) STEP: Launching a server daemon on node latest-worker2 (node ip: 172.18.0.15, image: k8s.gcr.io/e2e-test-images/agnhost:2.28) Mar 25 10:44:39.638: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:41.743: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:44.363: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:45.956: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:47.912: INFO: The status of Pod e2e-net-server is Running (Ready = true) STEP: Launching a client connection on node latest-worker (node ip: 172.18.0.17, image: k8s.gcr.io/e2e-test-images/agnhost:2.28) Mar 25 10:44:51.655: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:54.021: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:56.806: INFO: The status of Pod 
e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:44:58.195: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:45:00.423: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:45:02.578: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:45:03.829: INFO: The status of Pod e2e-net-client is Running (Ready = true) STEP: Checking conntrack entries for the timeout Mar 25 10:45:03.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=kube-proxy-3766 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 172.18.0.15 | grep -m 1 'CLOSE_WAIT.*dport=11302' ' Mar 25 10:45:06.783: INFO: stderr: "+ conntrack -L -f ipv4 -d 172.18.0.15\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 1 flow entries have been shown.\n" Mar 25 10:45:06.783: INFO: stdout: "tcp 6 3592 CLOSE_WAIT src=10.244.2.199 dst=172.18.0.15 sport=52332 dport=11302 src=172.18.0.15 dst=172.18.0.17 sport=11302 dport=52332 [ASSURED] mark=0 use=1\n" Mar 25 10:45:06.783: INFO: conntrack entry for node 172.18.0.15 and port 11302: tcp 6 3592 CLOSE_WAIT src=10.244.2.199 dst=172.18.0.15 sport=52332 dport=11302 src=172.18.0.15 dst=172.18.0.17 sport=11302 dport=52332 [ASSURED] mark=0 use=1 [AfterEach] [sig-network] KubeProxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:45:06.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kube-proxy-3766" for this suite. • [SLOW TEST:41.561 seconds] [sig-network] KubeProxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should set TCP CLOSE_WAIT timeout [Privileged] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53 ------------------------------ {"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":54,"completed":23,"skipped":2819,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:341 [BeforeEach] Forward external name lookup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:45:07.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns-config-map STEP: Waiting for a default service account to be provisioned in namespace [It] should forward externalname lookup to upstream nameserver [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:341 STEP: Finding a DNS pod Mar 25 10:45:10.863: INFO: Using DNS pod: coredns-74ff55c5b-pgfgz Mar 25 10:45:11.639: INFO: Created pod 
&Pod{ObjectMeta:{e2e-dns-configmap-697079e5-532f-4fb1-a911-db0c5f461151 dns-config-map-6269 ca91d767-cb5e-4ca1-bb1b-39ca71b5f5be 1076938 0 2021-03-25 10:45:11 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:45:10 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":10101,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n5wzw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5wzw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:10101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n5wzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstrain
t{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:45:24.387: INFO: Created service &Service{ObjectMeta:{e2e-dns-configmap dns-config-map-6269 fca2bf23-95b5-4e3e-b407-fad9277d5a37 1077040 0 2021-03-25 10:45:23 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:45:22 +0000 UTC FieldsV1 {"f:spec":{"f:ports":{".":{},"k:{\"port\":10101,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{".":{},"f:app":{}},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:10101,TargetPort:{0 10101 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: e2e-dns-configmap,},ClusterIP:10.96.45.44,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.45.44],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} Mar 25 10:45:25.438: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-820c6970-c588-41bb-b430-f87cb6c211be dns-config-map-6269 bf79adc8-badf-4574-b66c-3f2ce3f8f3b5 1077047 0 2021-03-25 10:45:25 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-25 10:45:25 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-f4hss,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-n5wzw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n5wzw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-n5wzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 25 10:45:33.982: INFO: ExecWithOptions {Command:[dig +short dns-externalname-upstream-test.dns-config-map-6269.svc.cluster.local] Namespace:dns-config-map-6269 PodName:e2e-dns-configmap-697079e5-532f-4fb1-a911-db0c5f461151 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:45:33.982: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:45:34.503: INFO: Running dig: [dig +short dns-externalname-upstream-test.dns-config-map-6269.svc.cluster.local], stdout: "dns.google.\n8.8.4.4\n8.8.8.8", stderr: "", err: STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { health ready kubernetes cluster.local in-addr.arpa ip6.arpa 
{ pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } forward . 10.244.2.200 }] BinaryData:map[]} Mar 25 10:45:35.404: INFO: ExecWithOptions {Command:[dig +short dns-externalname-upstream-local.dns-config-map-6269.svc.cluster.local] Namespace:dns-config-map-6269 PodName:e2e-dns-configmap-697079e5-532f-4fb1-a911-db0c5f461151 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:45:35.404: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:45:50.748: INFO: Running dig: [dig +short dns-externalname-upstream-local.dns-config-map-6269.svc.cluster.local], stdout: ";; connection timed out; no servers could be reached", stderr: "", err: command terminated with exit code 9 Mar 25 10:45:51.748: INFO: ExecWithOptions {Command:[dig +short dns-externalname-upstream-local.dns-config-map-6269.svc.cluster.local] Namespace:dns-config-map-6269 PodName:e2e-dns-configmap-697079e5-532f-4fb1-a911-db0c5f461151 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:45:51.749: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:45:57.300: INFO: Running dig: [dig +short dns-externalname-upstream-local.dns-config-map-6269.svc.cluster.local], stdout: "foo.example.com.\n192.0.2.123", stderr: "", err: STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]} STEP: deleting the test externalName service STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]} [AfterEach] Forward external name lookup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:46:19.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-config-map-6269" for this suite. 
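Editor's note: the test above rewrites the kube-system/coredns Corefile so that lookups are forwarded to the in-cluster test nameserver (10.244.2.200) and then verifies the externalname answer with dig from the client pod; the first attempt times out while CoreDNS reloads, and a later attempt returns foo.example.com. / 192.0.2.123 before the original Corefile is restored. A rough Go sketch of that verification loop follows, assuming dig is available in the probing container (the log runs it inside the agnhost container) and that retry-until-reload-completes behaviour; the interval and retry count are assumptions.

// Hedged sketch: retry `dig +short` until the upstream's record shows up.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// resolveShort runs `dig +short name` and returns the non-empty answer lines.
func resolveShort(name string) ([]string, error) {
	out, err := exec.Command("dig", "+short", name).Output()
	if err != nil {
		return nil, err // dig exits non-zero on timeouts, as seen in the log (exit code 9)
	}
	var lines []string
	for _, l := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if l != "" {
			lines = append(lines, l)
		}
	}
	return lines, nil
}

func main() {
	// Name and expected answer taken from the log above.
	const fqdn = "dns-externalname-upstream-local.dns-config-map-6269.svc.cluster.local"
	const want = "192.0.2.123"

	// Retry: CoreDNS may still be reloading the updated Corefile, which is why the
	// log shows a "connection timed out" attempt before the successful one.
	for try := 0; try < 10; try++ {
		if answers, err := resolveShort(fqdn); err == nil {
			for _, a := range answers {
				if a == want {
					fmt.Printf("upstream answered after %d retries: %v\n", try, answers)
					return
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("upstream never returned the expected record")
}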
• [SLOW TEST:75.053 seconds] [sig-network] DNS configMap nameserver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Forward external name lookup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:338 should forward externalname lookup to upstream nameserver [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:341 ------------------------------ {"msg":"PASSED [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]","total":54,"completed":24,"skipped":2841,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:46:22.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should update nodePort: http [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369 STEP: Performing setup for networking test in namespace nettest-5055 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:46:26.820: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:46:31.583: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:46:35.094: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:46:35.992: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:46:38.537: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:46:39.778: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:46:41.622: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:46:43.704: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:46:46.568: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:46:47.707: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:46:51.454: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:46:52.222: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:46:54.231: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:46:57.682: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 
25 10:47:00.106: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:47:01.663: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:47:04.063: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:47:04.303: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:47:20.542: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:47:20.542: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:47:23.289: INFO: Service node-port-service in namespace nettest-5055 found. Mar 25 10:47:25.574: INFO: Service session-affinity-service in namespace nettest-5055 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:47:26.897: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:47:28.079: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) 172.18.0.17 (node) --> 172.18.0.17:32487 (nodeIP) and getting ALL host endpoints Mar 25 10:47:28.262: INFO: Going to poll 172.18.0.17 on port 32487 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:47:28.526: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:47:28.526: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:47:29.287: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:47:31.389: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:47:31.389: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:47:32.522: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:47:34.664: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:47:34.664: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:47:37.372: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] STEP: Deleting the node port access point STEP: dialing(http) 172.18.0.17 (node) --> 172.18.0.17:32487 (nodeIP) and getting ZERO host endpoints Mar 25 10:47:54.398: INFO: Going to poll 172.18.0.17 on port 32487 at least 34 times, with a maximum of 34 tries before failing Mar 25 10:47:54.448: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:47:54.448: INFO: >>> kubeConfig: /root/.kube/config Mar 25 
10:47:54.594: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:47:54.594: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:47:56.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:47:56.671: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:47:56.800: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:47:56.800: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:47:58.869: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:47:58.869: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:47:59.092: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:47:59.092: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:01.765: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:01.765: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:02.368: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:02.368: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:04.987: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:04.987: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:06.778: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:06.778: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:09.267: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:09.268: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:09.688: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v 
'^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:09.688: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:11.836: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:11.836: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:12.257: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:12.258: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:14.333: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:14.333: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:14.863: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:14.863: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:16.993: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:16.993: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:17.884: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:17.884: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:20.185: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:20.185: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:20.561: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:20.561: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:23.274: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:23.274: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:23.927: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:23.927: INFO: Waiting for [] endpoints (expected=[], 
actual=[]) Mar 25 10:48:26.169: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:26.169: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:26.556: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:26.556: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:29.195: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:29.195: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:30.580: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:30.580: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:33.751: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:33.751: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:34.307: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:34.307: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:36.869: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:36.869: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:37.804: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:37.805: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:40.680: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:40.680: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:40.963: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:40.963: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:43.076: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 
http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:43.076: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:44.440: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:44.440: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:46.945: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:46.945: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:48.045: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:48.045: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:51.447: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:51.447: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:51.895: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:51.895: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:54.532: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:54.532: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:55.258: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:55.258: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:48:57.660: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:48:57.660: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:48:58.091: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:48:58.091: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:00.655: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:00.655: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:01.045: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:01.046: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:03.305: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:03.305: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:04.063: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:04.063: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:06.123: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:06.123: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:06.293: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:06.293: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:08.346: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:08.346: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:08.431: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:08.431: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:10.681: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:10.681: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:10.972: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:10.972: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:13.657: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:13.657: INFO: >>> kubeConfig: 
/root/.kube/config Mar 25 10:49:14.366: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:14.366: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:16.675: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:16.675: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:17.403: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:17.403: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:20.546: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:20.546: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:21.740: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:21.740: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:24.118: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:24.118: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:24.558: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:24.558: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:26.990: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:26.990: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:27.715: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:27.715: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:30.035: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:30.036: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:30.727: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 
http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:30.727: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:32.881: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:32.882: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:33.119: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:33.119: INFO: Waiting for [] endpoints (expected=[], actual=[]) Mar 25 10:49:35.196: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\s*$'] Namespace:nettest-5055 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:49:35.197: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:49:35.450: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.17:32487/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: "" Mar 25 10:49:35.450: INFO: Found all 0 expected endpoints: [] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:49:35.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-5055" for this suite. 
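Editor's note: after the node port Service is deleted, the test polls nodeIP:nodePort for the full 34-try budget and passes only if no backend ever answers, which is what the long run of exit-code-1 curl attempts above shows before "Found all 0 expected endpoints: []". Below is a standalone Go sketch of that "zero endpoints" check, using the /hostName handler and the address, port, and try values from this log; the sleep interval is an assumption, since the framework's exact pacing is not recorded here.

// Hedged sketch of the ZERO-host-endpoints check against a deleted NodePort.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// hostName queries the agnhost netserver's /hostName handler through the node port.
func hostName(client *http.Client, nodeIP string, nodePort int) (string, error) {
	resp, err := client.Get(fmt.Sprintf("http://%s:%d/hostName", nodeIP, nodePort))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	const nodeIP = "172.18.0.17" // node address from the log
	const nodePort = 32487       // node port from the log
	const maxTries = 34          // MaxTries computed for two endpoints

	client := &http.Client{Timeout: 15 * time.Second} // matches curl --max-time 15
	reached := map[string]bool{}
	for try := 1; try <= maxTries; try++ {
		if name, err := hostName(client, nodeIP, nodePort); err == nil && name != "" {
			reached[name] = true // a backend answered: the port is still routed
		}
		time.Sleep(2 * time.Second) // assumed pacing
	}
	if len(reached) == 0 {
		fmt.Println("found all 0 expected endpoints: []")
	} else {
		fmt.Printf("node port still answers: %v\n", reached)
	}
}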
• [SLOW TEST:193.449 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update nodePort: http [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]","total":54,"completed":25,"skipped":2930,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SS ------------------------------ [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:49:35.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for endpoint-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242 STEP: Performing setup for networking test in namespace nettest-3679 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:49:37.548: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:49:39.790: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:49:42.521: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:49:43.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:49:46.221: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:49:48.049: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:49:50.235: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:49:52.535: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:49:54.508: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:49:56.437: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:49:58.389: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:49:58.627: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 10:50:01.064: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 10:50:02.701: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 25 10:50:04.861: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:50:13.422: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: 
Getting node addresses Mar 25 10:50:13.422: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:50:21.822: INFO: Service node-port-service in namespace nettest-3679 found. Mar 25 10:50:24.399: INFO: Service session-affinity-service in namespace nettest-3679 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:50:25.419: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:50:26.947: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(http) netserver-0 (endpoint) --> 10.96.43.146:80 (config.clusterIP) Mar 25 10:50:27.525: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=10.96.43.146&port=80&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:27.525: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:27.808: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:29.859: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=10.96.43.146&port=80&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:29.859: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:30.015: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:32.196: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=10.96.43.146&port=80&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:32.196: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:32.410: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:34.765: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=10.96.43.146&port=80&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:34.765: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:35.288: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:37.466: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=10.96.43.146&port=80&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:37.466: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:37.905: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:40.180: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=10.96.43.146&port=80&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:40.180: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:40.475: INFO: Waiting for responses: map[] Mar 25 10:50:40.475: INFO: reached 
10.96.43.146 after 5/34 tries STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.17:32276 (nodeIP) Mar 25 10:50:40.574: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32276&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:40.574: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:40.828: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:43.005: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32276&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:43.005: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:43.660: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:45.765: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32276&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:45.765: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:46.035: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:48.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32276&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:48.142: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:48.547: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:50.597: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32276&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:50.597: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:51.150: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:53.275: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32276&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:53.275: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:53.808: INFO: Waiting for responses: map[netserver-1:{}] Mar 25 10:50:55.893: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.218:8080/dial?request=hostname&protocol=http&host=172.18.0.17&port=32276&tries=1'] Namespace:nettest-3679 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:50:55.894: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:50:56.230: INFO: Waiting for responses: map[] Mar 25 10:50:56.230: INFO: reached 172.18.0.17 after 6/34 tries [AfterEach] [sig-network] Networking 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:50:56.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-3679" for this suite. • [SLOW TEST:81.524 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for endpoint-Service: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http","total":54,"completed":26,"skipped":2932,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:50:57.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 STEP: Performing setup for networking test in namespace nettest-1057 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 25 10:50:58.243: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 25 10:50:58.820: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:51:01.060: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:51:02.910: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:51:05.185: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 25 10:51:07.000: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:51:09.231: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:51:11.539: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:51:13.034: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:51:15.021: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:51:17.274: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:51:19.125: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 
10:51:21.062: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:51:23.293: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 25 10:51:24.960: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 25 10:51:25.193: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 25 10:51:32.275: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 25 10:51:32.275: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 25 10:51:34.808: INFO: Service node-port-service in namespace nettest-1057 found. Mar 25 10:51:35.776: INFO: Service session-affinity-service in namespace nettest-1057 found. STEP: Waiting for NodePort service to expose endpoint Mar 25 10:51:36.915: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 25 10:51:37.972: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(udp) 172.18.0.17 (node) --> 10.96.93.16:90 (config.clusterIP) Mar 25 10:51:38.156: INFO: Going to poll 10.96.93.16 on port 90 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:51:38.250: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.93.16 90 | grep -v '^\s*$'] Namespace:nettest-1057 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:51:38.250: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:51:39.614: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:51:42.325: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.93.16 90 | grep -v '^\s*$'] Namespace:nettest-1057 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:51:42.325: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:51:44.399: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:51:47.033: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.93.16 90 | grep -v '^\s*$'] Namespace:nettest-1057 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:51:47.033: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:51:49.529: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] STEP: dialing(udp) 172.18.0.17 (node) --> 172.18.0.17:30325 (nodeIP) Mar 25 10:51:49.529: INFO: Going to poll 172.18.0.17 on port 30325 at least 0 times, with a maximum of 34 tries before failing Mar 25 10:51:49.659: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30325 | grep -v '^\s*$'] Namespace:nettest-1057 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:51:49.659: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:51:50.815: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 25 10:51:53.461: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.17 30325 | grep -v '^\s*$'] 
Namespace:nettest-1057 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 10:51:53.462: INFO: >>> kubeConfig: /root/.kube/config Mar 25 10:51:54.913: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 10:51:54.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-1057" for this suite. • [SLOW TEST:58.720 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: udp","total":54,"completed":27,"skipped":3054,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 10:51:55.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91 Mar 25 10:51:56.720: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
alternatives.log
containers/
[identical directory listing repeated for each subsequent /api/v1/nodes/latest-worker/proxy/logs/ request]
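The repeated listings above are responses from the apiserver's node proxy subresource: each request to /api/v1/nodes/latest-worker/proxy/logs/ returns the node's log directory (alternatives.log, containers/). Below is a minimal client-go sketch of the same request; the kubeconfig path and node name are taken from this run, and the code is illustrative rather than the e2e framework's own implementation.

// Sketch: read a node's log directory through the apiserver proxy
// subresource, the same endpoint the proxy test above requests repeatedly.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the kubeconfig path used in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes/latest-worker/proxy/logs/ returns a directory
	// listing (e.g. "alternatives.log", "containers/").
	body, err := cs.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name("latest-worker").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}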
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-7749
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 10:52:01.924: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 10:52:04.067: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:52:07.029: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:52:08.918: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:52:10.691: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:52:12.844: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:52:14.359: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:52:16.168: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:52:18.259: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:52:20.216: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:52:22.156: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:52:24.101: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:52:26.857: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 10:52:27.162: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 10:52:29.721: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 10:52:31.181: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 10:52:33.221: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 10:52:39.782: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 10:52:39.782: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 10:52:41.449: INFO: Service node-port-service in namespace nettest-7749 found.
Mar 25 10:52:42.893: INFO: Service session-affinity-service in namespace nettest-7749 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 10:52:44.070: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 10:52:45.591: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) netserver-0 (endpoint) --> 10.96.197.105:90 (config.clusterIP)
Mar 25 10:52:46.136: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostname&protocol=udp&host=10.96.197.105&port=90&tries=1'] Namespace:nettest-7749 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 10:52:46.136: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:52:47.427: INFO: Waiting for responses: map[netserver-1:{}]
Mar 25 10:52:49.467: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostname&protocol=udp&host=10.96.197.105&port=90&tries=1'] Namespace:nettest-7749 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 10:52:49.467: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:52:50.182: INFO: Waiting for responses: map[]
Mar 25 10:52:50.182: INFO: reached 10.96.197.105 after 1/34 tries
STEP: dialing(udp) netserver-0 (endpoint) --> 172.18.0.17:31916 (nodeIP)
Mar 25 10:52:50.782: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostname&protocol=udp&host=172.18.0.17&port=31916&tries=1'] Namespace:nettest-7749 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 10:52:50.782: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:52:51.654: INFO: Waiting for responses: map[netserver-0:{}]
Mar 25 10:52:54.510: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.221:8080/dial?request=hostname&protocol=udp&host=172.18.0.17&port=31916&tries=1'] Namespace:nettest-7749 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 10:52:54.510: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:52:55.459: INFO: Waiting for responses: map[]
Mar 25 10:52:55.459: INFO: reached 172.18.0.17 after 1/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:52:55.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7749" for this suite.

• [SLOW TEST:57.561 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":54,"completed":29,"skipped":3268,"failed":2,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] ESIPP [Slow] 
  should only target nodes with endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:52:57.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Mar 25 10:53:00.357: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:53:00.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7048" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [4.502 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
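The SKIPPING record above is produced in the spec's BeforeEach, before any load-balancer work starts: on this local kind cluster the provider check fails, so the spec never runs. The sketch below shows how such a gate is typically written with the e2e framework's skipper helper; treat the exact call site in loadbalancer.go as an assumption, this is only the general pattern that yields "Only supported for providers [gce gke] (not local)".

// Sketch of a provider-gated spec, in the style of the e2e network tests.
package network

import (
	"github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
	e2eskipper "k8s.io/kubernetes/test/e2e/framework/skipper"
)

var _ = ginkgo.Describe("[sig-network] ESIPP [Slow]", func() {
	f := framework.NewDefaultFramework("esipp")

	ginkgo.BeforeEach(func() {
		// On a local provider this records the skip reason and aborts the spec.
		e2eskipper.SkipUnlessProviderIs("gce", "gke")
	})

	ginkgo.It("should only target nodes with endpoints", func() {
		_ = f // load-balancer assertions would go here on a supported provider
	})
})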
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:53:01.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-6196
Mar 25 10:53:05.127: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-6196
I0325 10:53:07.965848       7 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-6196, replica count: 2
I0325 10:53:11.017037       7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:53:14.018023       7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:53:17.018783       7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:53:20.019603       7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:53:23.020609       7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:53:26.021082       7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
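The runners.go progress lines above tally the replication controller's pods by phase until both replicas are Running. A standalone client-go sketch of the same tally follows; the "name=nodeport-update-service" label selector is an assumption about how the pods are labelled, and the namespace is the one from this run.

// Sketch: count the controller's pods by phase, as the progress lines report.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("services-6196").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "name=nodeport-update-service", // assumed pod label
	})
	if err != nil {
		panic(err)
	}
	counts := map[corev1.PodPhase]int{}
	for _, p := range pods.Items {
		counts[p.Status.Phase]++
	}
	fmt.Printf("created=%d running=%d pending=%d\n",
		len(pods.Items), counts[corev1.PodRunning], counts[corev1.PodPending])
}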
Mar 25 10:53:26.021: INFO: Creating new exec pod
E0325 10:53:37.880678       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:53:39.295383       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:53:42.151696       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:53:48.088204       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:53:59.118188       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:54:18.549745       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:55:06.931851       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 10:55:37.879: FAIL: Unexpected error:
    <*errors.errorString | 0xc002f8a020>: {
        s: "no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00294cf00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00294cf00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00294cf00, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
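The failure above comes from waiting for the nodeport-update-service endpoint to expose a usable address within 2m0s, while the repeated reflector errors show that the jig's EndpointSlice list/watch was rejected by this apiserver. For comparison, the sketch below polls the plain core/v1 Endpoints object for a ready address; it illustrates the condition that timed out, not the jig's own implementation.

// Sketch: wait until the Service's Endpoints object lists a ready address.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, svc := "services-6196", "nodeport-update-service"

	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
		if err != nil {
			return false, nil // Endpoints not created yet; keep polling
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 {
				fmt.Printf("ready addresses: %v\n", subset.Addresses)
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("no ready endpoint addresses within timeout:", err)
	}
}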
Mar 25 10:55:37.880: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6196".
STEP: Found 14 events.
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:08 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-9rjf6
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:08 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-7sbbz
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:08 +0000 UTC - event for nodeport-update-service-7sbbz: {default-scheduler } Scheduled: Successfully assigned services-6196/nodeport-update-service-7sbbz to latest-worker
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:08 +0000 UTC - event for nodeport-update-service-9rjf6: {default-scheduler } Scheduled: Successfully assigned services-6196/nodeport-update-service-9rjf6 to latest-worker2
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:10 +0000 UTC - event for nodeport-update-service-7sbbz: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:15 +0000 UTC - event for nodeport-update-service-9rjf6: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:18 +0000 UTC - event for nodeport-update-service-7sbbz: {kubelet latest-worker} Created: Created container nodeport-update-service
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:19 +0000 UTC - event for nodeport-update-service-7sbbz: {kubelet latest-worker} Started: Started container nodeport-update-service
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:21 +0000 UTC - event for nodeport-update-service-9rjf6: {kubelet latest-worker2} Created: Created container nodeport-update-service
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:23 +0000 UTC - event for nodeport-update-service-9rjf6: {kubelet latest-worker2} Started: Started container nodeport-update-service
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:27 +0000 UTC - event for execpod4ktpw: {default-scheduler } Scheduled: Successfully assigned services-6196/execpod4ktpw to latest-worker2
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:30 +0000 UTC - event for execpod4ktpw: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:35 +0000 UTC - event for execpod4ktpw: {kubelet latest-worker2} Created: Created container agnhost-container
Mar 25 10:55:40.038: INFO: At 2021-03-25 10:53:36 +0000 UTC - event for execpod4ktpw: {kubelet latest-worker2} Started: Started container agnhost-container
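The "Collecting events from namespace" step above amounts to listing core/v1 Events in the failed spec's namespace and printing them. A minimal sketch is below; it is illustrative only, and the framework's own dump adds sorting and formatting beyond this.

// Sketch: list and print the namespace's events, as dumped after a failure.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	events, err := cs.CoreV1().Events("services-6196").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		fmt.Printf("At %s - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
}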
Mar 25 10:55:40.205: INFO: POD                            NODE            PHASE    GRACE  CONDITIONS
Mar 25 10:55:40.205: INFO: execpod4ktpw                   latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:27 +0000 UTC  }]
Mar 25 10:55:40.205: INFO: nodeport-update-service-7sbbz  latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:08 +0000 UTC  }]
Mar 25 10:55:40.205: INFO: nodeport-update-service-9rjf6  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:53:08 +0000 UTC  }]
Mar 25 10:55:40.205: INFO: 
Mar 25 10:55:40.271: INFO: 
Logging node info for node latest-control-plane
Mar 25 10:55:40.349: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane    cc9ffc7a-24ee-4720-b82b-ca49361a1767 1083137 0 2021-03-22 08:06:26 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:53:45 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:53:45 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:53:45 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:53:45 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 10:55:40.350: INFO: 
Logging kubelet events for node latest-control-plane
Mar 25 10:55:40.509: INFO: 
Logging pods the kubelet thinks is on node latest-control-plane
Mar 25 10:55:40.558: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.558: INFO: 	Container etcd ready: true, restart count 0
Mar 25 10:55:40.558: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.558: INFO: 	Container kube-controller-manager ready: true, restart count 0
Mar 25 10:55:40.558: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.558: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 10:55:40.558: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.558: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 10:55:40.558: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.558: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 25 10:55:40.558: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.558: INFO: 	Container kube-scheduler ready: true, restart count 0
Mar 25 10:55:40.558: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.558: INFO: 	Container local-path-provisioner ready: true, restart count 0
W0325 10:55:40.598052       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 10:55:40.723: INFO: 
Latency metrics for node latest-control-plane
Mar 25 10:55:40.723: INFO: 
Logging node info for node latest-worker
Mar 25 10:55:40.798: INFO: Node Info: &Node{ObjectMeta:{latest-worker    d799492c-1b1f-4258-b431-31204511a98f 1081948 0 2021-03-22 08:06:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:46:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:23:27 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:52:05 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 10:55:40.799: INFO: 
Logging kubelet events for node latest-worker
Mar 25 10:55:40.845: INFO: 
Logging pods the kubelet thinks is on node latest-worker
Mar 25 10:55:40.925: INFO: kindnet-485hg started at 2021-03-25 10:20:57 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.925: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 10:55:40.925: INFO: affinity-clusterip-transition-nzkpv started at 2021-03-25 10:52:52 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.925: INFO: 	Container affinity-clusterip-transition ready: false, restart count 0
Mar 25 10:55:40.925: INFO: coredns-74ff55c5b-hm8x8 started at 2021-03-25 10:46:16 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.925: INFO: 	Container coredns ready: true, restart count 0
Mar 25 10:55:40.925: INFO: suspend-false-to-true-2l5xh started at 2021-03-25 10:47:59 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.925: INFO: 	Container c ready: true, restart count 0
Mar 25 10:55:40.925: INFO: coredns-74ff55c5b-fzmjd started at 2021-03-25 10:46:15 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.925: INFO: 	Container coredns ready: true, restart count 0
Mar 25 10:55:40.925: INFO: nodeport-update-service-7sbbz started at 2021-03-25 10:53:08 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.925: INFO: 	Container nodeport-update-service ready: true, restart count 0
Mar 25 10:55:40.925: INFO: suspend-false-to-true-bccbh started at 2021-03-25 10:47:59 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.925: INFO: 	Container c ready: true, restart count 0
Mar 25 10:55:40.925: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:40.925: INFO: 	Container kube-proxy ready: true, restart count 0
W0325 10:55:40.978508       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 10:55:41.240: INFO: 
Latency metrics for node latest-worker
Mar 25 10:55:41.240: INFO: 
Logging node info for node latest-worker2
Mar 25 10:55:41.296: INFO: Node Info: &Node{ObjectMeta:{latest-worker2    525d2fa2-95f1-4436-b726-c3866136dd3a 1082041 0 2021-03-22 08:06:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 09:47:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 10:33:09 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:52:15 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:52:15 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 10:55:41.297: INFO: 
Logging kubelet events for node latest-worker2
Mar 25 10:55:41.391: INFO: 
Logging pods the kubelet thinks are on node latest-worker2
Mar 25 10:55:41.475: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:41.475: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 10:55:41.475: INFO: affinity-clusterip-transition-bzvh6 started at 2021-03-25 10:52:52 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:41.475: INFO: 	Container affinity-clusterip-transition ready: false, restart count 0
Mar 25 10:55:41.475: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:41.475: INFO: 	Container volume-tester ready: false, restart count 0
Mar 25 10:55:41.475: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:41.475: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 10:55:41.475: INFO: nodeport-update-service-9rjf6 started at 2021-03-25 10:53:08 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:41.475: INFO: 	Container nodeport-update-service ready: true, restart count 0
Mar 25 10:55:41.475: INFO: execpod4ktpw started at 2021-03-25 10:53:27 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:41.475: INFO: 	Container agnhost-container ready: true, restart count 0
Mar 25 10:55:41.475: INFO: affinity-clusterip-transition-7w5n5 started at 2021-03-25 10:52:52 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:55:41.475: INFO: 	Container affinity-clusterip-transition ready: false, restart count 0
W0325 10:55:41.483294       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 10:55:41.665: INFO: 
Latency metrics for node latest-worker2
Mar 25 10:55:41.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6196" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [160.156 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Mar 25 10:55:37.879: Unexpected error:
      <*errors.errorString | 0xc002f8a020>: {
          s: "no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s",
      }
      no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":54,"completed":29,"skipped":3741,"failed":3,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Firewall rule 
  control plane should not expose well-known ports
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:55:41.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Mar 25 10:55:42.141: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:55:42.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-338" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.490 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Netpol API 
  should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:55:42.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Mar 25 10:55:42.768: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Mar 25 10:55:42.936: INFO: starting watch
STEP: patching
STEP: updating
Mar 25 10:55:43.393: INFO: waiting for watch events with expected annotations
Mar 25 10:55:43.394: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Mar 25 10:55:43.394: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:55:44.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-8932" for this suite.
•{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":54,"completed":30,"skipped":4004,"failed":3,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking 
  should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:55:45.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:55:47.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6422" for this suite.
•{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":54,"completed":31,"skipped":4178,"failed":3,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Conntrack 
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:55:47.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-5936
STEP: creating a client pod for probing the service svc-udp
Mar 25 10:55:49.847: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:55:52.195: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:55:53.961: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:55:56.050: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:55:57.902: INFO: The status of Pod pod-client is Running (Ready = true)
Mar 25 10:55:59.153: INFO: Pod client logs: Thu Mar 25 10:55:56 UTC 2021
Thu Mar 25 10:55:56 UTC 2021 Try: 1

Thu Mar 25 10:55:56 UTC 2021 Try: 2

Thu Mar 25 10:55:56 UTC 2021 Try: 3

Thu Mar 25 10:55:56 UTC 2021 Try: 4

Thu Mar 25 10:55:56 UTC 2021 Try: 5

Thu Mar 25 10:55:56 UTC 2021 Try: 6

Thu Mar 25 10:55:56 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Mar 25 10:55:59.603: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:02.487: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:03.705: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:05.808: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-5936 to expose endpoints map[pod-server-1:[80]]
Mar 25 10:56:06.358: INFO: successfully validated that service svc-udp in namespace conntrack-5936 exposes endpoints map[pod-server-1:[80]]
STEP: checking that the client pod is connected to backend 1 on Node IP 172.18.0.15
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Mar 25 10:56:16.949: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:19.110: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:21.183: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:23.064: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Mar 25 10:56:23.066: INFO: Cleaning up pod-server-1 pod
Mar 25 10:56:24.056: INFO: Waiting for pod pod-server-1 to disappear
Mar 25 10:56:24.315: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-5936 to expose endpoints map[pod-server-2:[80]]
Mar 25 10:56:24.661: INFO: successfully validated that service svc-udp in namespace conntrack-5936 exposes endpoints map[pod-server-2:[80]]
STEP: checking that the client pod is connected to backend 2 on Node IP 172.18.0.15
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:56:35.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-5936" for this suite.

• [SLOW TEST:47.765 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":54,"completed":32,"skipped":4285,"failed":3,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:56:35.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-5678
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 10:56:37.135: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 10:56:38.026: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:40.240: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:43.287: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:44.241: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:56:46.704: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:56:48.067: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:56:50.647: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:56:52.734: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:56:54.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:56:56.654: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 10:56:58.860: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 10:57:00.242: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 10:57:09.675: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 10:57:09.675: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 10:57:12.753: INFO: Service node-port-service in namespace nettest-5678 found.
Mar 25 10:57:16.237: INFO: Service session-affinity-service in namespace nettest-5678 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 10:57:17.972: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 10:57:19.681: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.96.252.12:90 (config.clusterIP)
Mar 25 10:57:19.969: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.145:9080/dial?request=echo%20nooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.96.252.12&port=90&tries=1'] Namespace:nettest-5678 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 10:57:19.969: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 10:57:20.152: INFO: Waiting for responses: map[]
Mar 25 10:57:20.152: INFO: reached 10.96.252.12 after 0/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:57:20.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5678" for this suite.

• [SLOW TEST:45.002 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":54,"completed":33,"skipped":4443,"failed":3,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] ESIPP [Slow] 
  should work from pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:57:20.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Mar 25 10:57:21.668: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 10:57:21.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-3578" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [2.134 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:57:22.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-8382
Mar 25 10:57:24.182: INFO: hairpin-test cluster ip: 10.96.95.16
STEP: creating a client/server pod
Mar 25 10:57:24.554: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:57:26.818: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:57:28.657: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Mar 25 10:57:30.687: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-8382 to expose endpoints map[hairpin:[8080]]
Mar 25 10:57:30.911: INFO: successfully validated that service hairpin-test in namespace services-8382 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
E0325 10:57:30.912637       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:57:32.165438       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:57:34.529208       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:57:40.278691       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:57:48.947708       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:58:05.561082       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0325 10:58:35.857196       7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 25 10:59:30.911: FAIL: Unexpected error:
    <*errors.errorString | 0xc003aae010>: {
        s: "no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012 +0x6a5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00294cf00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00294cf00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00294cf00, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
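
The hairpin spec never reached its connectivity check in this run because the endpoint wait hit the same two-minute "no subset of available IP address found" timeout seen earlier, but the check itself amounts to exec'ing a connect from the pod back to its own service VIP. A hedged sketch of that exec with client-go's remotecommand machinery, reusing the namespace, pod, container name and cluster IP printed in this log, and assuming the agnhost image's connect subcommand is available:

package main

import (
    "bytes"
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(config)

    ns, pod := "services-8382", "hairpin" // names from the spec above
    // Ask the pod to connect to its own service VIP (hairpin-test, 10.96.95.16:8080).
    cmd := []string{"/agnhost", "connect", "--timeout=3s", "10.96.95.16:8080"}

    req := cs.CoreV1().RESTClient().Post().
        Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
        VersionedParams(&v1.PodExecOptions{
            Container: "agnhost-container",
            Command:   cmd,
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    var stdout, stderr bytes.Buffer
    if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
        fmt.Println("hairpin connection failed:", err, stderr.String())
        return
    }
    fmt.Println("hairpin connection succeeded:", stdout.String())
}

A successful connect only proves that the node's proxying setup lets a pod be both client and backend of the same service, which is what hairpin NAT (the kubelet hairpin-mode setting) is about.
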
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-8382".
STEP: Found 4 events.
Mar 25 10:59:31.046: INFO: At 2021-03-25 10:57:24 +0000 UTC - event for hairpin: {default-scheduler } Scheduled: Successfully assigned services-8382/hairpin to latest-worker
Mar 25 10:59:31.046: INFO: At 2021-03-25 10:57:26 +0000 UTC - event for hairpin: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 25 10:59:31.046: INFO: At 2021-03-25 10:57:29 +0000 UTC - event for hairpin: {kubelet latest-worker} Created: Created container agnhost-container
Mar 25 10:59:31.047: INFO: At 2021-03-25 10:57:29 +0000 UTC - event for hairpin: {kubelet latest-worker} Started: Started container agnhost-container
Mar 25 10:59:31.245: INFO: POD      NODE           PHASE    GRACE  CONDITIONS
Mar 25 10:59:31.245: INFO: hairpin  latest-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:57:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:57:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:57:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 10:57:24 +0000 UTC  }]
Mar 25 10:59:31.245: INFO: 
Mar 25 10:59:31.269: INFO: 
Logging node info for node latest-control-plane
Mar 25 10:59:31.465: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane    cc9ffc7a-24ee-4720-b82b-ca49361a1767 1086286 0 2021-03-22 08:06:26 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:46 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:46 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:46 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:58:46 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 10:59:31.465: INFO: 
Logging kubelet events for node latest-control-plane
Mar 25 10:59:31.566: INFO: 
Logging pods the kubelet thinks are on node latest-control-plane
Mar 25 10:59:31.662: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:31.662: INFO: 	Container kube-controller-manager ready: true, restart count 0
Mar 25 10:59:31.662: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:31.662: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 10:59:31.662: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:31.662: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 10:59:31.662: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:31.662: INFO: 	Container etcd ready: true, restart count 0
Mar 25 10:59:31.662: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:31.662: INFO: 	Container kube-scheduler ready: true, restart count 0
Mar 25 10:59:31.662: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:31.662: INFO: 	Container local-path-provisioner ready: true, restart count 0
Mar 25 10:59:31.662: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:31.663: INFO: 	Container kube-apiserver ready: true, restart count 0
W0325 10:59:31.827562       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 10:59:31.965: INFO: 
Latency metrics for node latest-control-plane
Mar 25 10:59:31.965: INFO: 
Logging node info for node latest-worker
Mar 25 10:59:32.004: INFO: Node Info: &Node{ObjectMeta:{latest-worker    d799492c-1b1f-4258-b431-31204511a98f 1085836 0 2021-03-22 08:06:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 10:23:27 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kubelet Update v1 2021-03-25 10:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacit
y:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:06 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:58:06 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 10:59:32.005: INFO: 
Logging kubelet events for node latest-worker
Mar 25 10:59:32.037: INFO: 
Logging pods the kubelet thinks are on node latest-worker
Mar 25 10:59:32.226: INFO: suspend-false-to-true-2l5xh started at 2021-03-25 10:47:59 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:32.226: INFO: 	Container c ready: true, restart count 0
Mar 25 10:59:32.226: INFO: coredns-74ff55c5b-fzmjd started at 2021-03-25 10:46:15 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:32.226: INFO: 	Container coredns ready: true, restart count 0
Mar 25 10:59:32.226: INFO: suspend-false-to-true-bccbh started at 2021-03-25 10:47:59 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:32.226: INFO: 	Container c ready: true, restart count 0
Mar 25 10:59:32.226: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:32.226: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 25 10:59:32.226: INFO: kindnet-485hg started at 2021-03-25 10:20:57 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:32.226: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 10:59:32.226: INFO: hairpin started at 2021-03-25 10:57:24 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:32.226: INFO: 	Container agnhost-container ready: true, restart count 0
Mar 25 10:59:32.226: INFO: coredns-74ff55c5b-hm8x8 started at 2021-03-25 10:46:16 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:32.226: INFO: 	Container coredns ready: true, restart count 0
W0325 10:59:32.408675       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 10:59:32.813: INFO: 
Latency metrics for node latest-worker
Mar 25 10:59:32.813: INFO: 
Logging node info for node latest-worker2
Mar 25 10:59:32.830: INFO: Node Info: &Node{ObjectMeta:{latest-worker2    525d2fa2-95f1-4436-b726-c3866136dd3a 1085830 0 2021-03-22 08:06:55 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 10:56:41 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-25 10:58:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 10:58:05 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 10:58:05 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 10:59:32.831: INFO: 
Logging kubelet events for node latest-worker2
Mar 25 10:59:32.928: INFO: 
Logging pods the kubelet thinks are on node latest-worker2
Mar 25 10:59:33.012: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:33.012: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 25 10:59:33.012: INFO: ss-0 started at 2021-03-25 10:59:24 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:33.012: INFO: 	Container webserver ready: true, restart count 0
Mar 25 10:59:33.012: INFO: kube-proxy-mode-detector started at 2021-03-25 10:59:17 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:33.012: INFO: 	Container agnhost-container ready: true, restart count 0
Mar 25 10:59:33.012: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:33.012: INFO: 	Container volume-tester ready: false, restart count 0
Mar 25 10:59:33.012: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded)
Mar 25 10:59:33.012: INFO: 	Container kube-proxy ready: true, restart count 0
W0325 10:59:33.397137       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 10:59:34.498: INFO: 
Latency metrics for node latest-worker2
Mar 25 10:59:34.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8382" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [132.103 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986

  Mar 25 10:59:30.911: Unexpected error:
      <*errors.errorString | 0xc003aae010>: {
          s: "no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s",
      }
      no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012
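
  A quick way to dig into a failure like this, sketched under the assumption that the same kubeconfig is used and that the namespace is still live (here services-8382 is already being destroyed above, so this would have to be run before teardown or on a reproduction), is to look at the Endpoints object the test was waiting on and at the hairpin pod itself:

    kubectl --kubeconfig=/root/.kube/config -n services-8382 get endpoints hairpin-test -o wide
    kubectl --kubeconfig=/root/.kube/config -n services-8382 describe pod hairpin

  An Endpoints object with no ready addresses in its subsets usually means the backing pod never became Ready within the 2m0s timeout reported above.
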
------------------------------
{"msg":"FAILED [sig-network] Services should allow pods to hairpin back to themselves through services","total":54,"completed":33,"skipped":5060,"failed":4,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 10:59:34.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-3437
STEP: creating service up-down-1 in namespace services-3437
STEP: creating replication controller up-down-1 in namespace services-3437
I0325 10:59:37.347757       7 runners.go:190] Created replication controller with name: up-down-1, namespace: services-3437, replica count: 3
I0325 10:59:40.398531       7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:59:43.399624       7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:59:46.400451       7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:59:49.402118       7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:59:52.402518       7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 10:59:55.403328       7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-3437
STEP: creating service up-down-2 in namespace services-3437
STEP: creating replication controller up-down-2 in namespace services-3437
I0325 10:59:55.929899       7 runners.go:190] Created replication controller with name: up-down-2, namespace: services-3437, replica count: 3
I0325 10:59:58.981049       7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 11:00:01.982010       7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 11:00:04.983222       7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
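
  The replica-count progression logged above can also be watched directly with kubectl; a minimal sketch, assuming the same kubeconfig and that namespace services-3437 is still live:

    kubectl --kubeconfig=/root/.kube/config -n services-3437 get rc up-down-1 up-down-2 -w
    kubectl --kubeconfig=/root/.kube/config -n services-3437 get endpoints up-down-1 up-down-2
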
STEP: verifying service up-down-1 is up
Mar 25 11:00:05.090: INFO: Creating new host exec pod
Mar 25 11:00:05.745: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:00:07.785: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:00:09.833: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:00:11.795: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 11:00:11.795: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 11:00:21.006: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.95.129:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-host-exec-pod
Mar 25 11:00:21.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.95.129:80 2>&1 || true; echo; done'
Mar 25 11:00:29.986: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n"
Mar 25 11:00:29.987: INFO: stdout: "up-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\n"
Mar 25 11:00:29.987: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.95.129:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-exec-pod-zv6n2
Mar 25 11:00:29.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-exec-pod-zv6n2 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.95.129:80 2>&1 || true; echo; done'
Mar 25 11:00:30.445: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.95.129:80\n+ echo\n"
Mar 25 11:00:30.445: INFO: stdout: "up-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-g8gv6\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-ghz4v\nup-down-1-rq64l\nup-down-1-g8gv6\nup-down-1-rq64l\nup-down-1-ghz4v\nup-down-1-rq64l\n"
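
  The two stdout blocks above are what the test uses to verify that the service has 3 reachable backends: each line is the name of the pod that answered one wget request. A quick manual tally, sketched assuming the stdout has been saved to a local file (up-down-1.txt is a hypothetical name), would be:

    sort up-down-1.txt | uniq -c

  A healthy run shows all three replica names (up-down-1-g8gv6, up-down-1-ghz4v, up-down-1-rq64l) with roughly even hit counts.
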
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3437
STEP: Deleting pod verify-service-up-exec-pod-zv6n2 in namespace services-3437
STEP: verifying service up-down-2 is up
Mar 25 11:00:31.560: INFO: Creating new host exec pod
Mar 25 11:00:31.958: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:00:34.304: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:00:36.036: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 11:00:36.036: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 11:00:42.334: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-host-exec-pod
Mar 25 11:00:42.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done'
Mar 25 11:00:42.862: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.37.137:80\n+ echo\n [+ wget / + echo trace pair repeated for all 150 iterations]\n"
Mar 25 11:00:42.862: INFO: stdout: "up-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\n"
Mar 25 11:00:42.863: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-exec-pod-8ckcs
Mar 25 11:00:42.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-exec-pod-8ckcs -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done'
Mar 25 11:00:43.380: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.37.137:80\n+ echo\n [+ wget / + echo trace pair repeated for all 150 iterations]\n"
Mar 25 11:00:43.381: INFO: stdout: "up-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3437
STEP: Deleting pod verify-service-up-exec-pod-8ckcs in namespace services-3437
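The service-up check above runs the same 150-request wget loop from both a host-network pod and a regular exec pod, then confirms that the responses cover every expected serve-hostname backend. A minimal standalone sketch of that pattern, assuming the ClusterIP 10.96.37.137:80 and the three up-down-2 pod names seen in the output above:

    # Hedged sketch: replay the reachability check the framework performs via kubectl exec.
    SVC=http://10.96.37.137:80        # ClusterIP:port taken from the log
    EXPECTED="up-down-2-5h9j6 up-down-2-qn8bh up-down-2-ck4c7"

    out=$(for i in $(seq 1 150); do
      wget -q -T 1 -O - "$SVC" 2>&1 || true
      echo
    done)

    for pod in $EXPECTED; do
      echo "$out" | grep -q "$pod" || { echo "backend $pod never answered"; exit 1; }
    done
    echo "all 3 backends reachable"

The real framework compares the full set of unique responses against the expected pod names; the sketch only checks that each expected backend shows up at least once.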
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-3437, will wait for the garbage collector to delete the pods
Mar 25 11:00:45.678: INFO: Deleting ReplicationController up-down-1 took: 675.946731ms
Mar 25 11:00:46.478: INFO: Terminating ReplicationController up-down-1 pods took: 800.169831ms
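Here the framework deletes the ReplicationController and then blocks until the garbage collector has removed its pods. A rough kubectl equivalent of the same two phases (the name=up-down-1 label selector is an assumption about how the serve-hostname RC labels its pods):

    # Hedged sketch: delete the RC, then wait for the GC to finish terminating its pods.
    kubectl --namespace=services-3437 delete rc up-down-1
    kubectl --namespace=services-3437 wait pod -l name=up-down-1 \
      --for=delete --timeout=120s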
STEP: verifying service up-down-1 is not up
Mar 25 11:02:00.258: INFO: Creating new host exec pod
Mar 25 11:02:01.641: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:04.179: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:05.892: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:07.784: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Mar 25 11:02:07.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.95.129:80 && echo service-down-failed'
Mar 25 11:02:10.233: INFO: rc: 28
Mar 25 11:02:10.233: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.95.129:80 && echo service-down-failed" in pod services-3437/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.95.129:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.96.95.129:80
command terminated with exit code 28

error:
exit status 28
Output: 
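The service-down check is the inverse of the wget loop: curl against the deleted service's ClusterIP must fail to connect (here with exit code 28, a connect timeout), and the sentinel string service-down-failed must never be echoed. A hedged sketch of the same idea, assuming the ClusterIP 10.96.95.129 from the log:

    # Hedged sketch: the service counts as torn down only if curl cannot connect.
    if curl -g -s --connect-timeout 2 http://10.96.95.129:80; then
      echo "service-down-failed"   # reaching any backend means the proxy rules were not cleaned up
      exit 1
    fi
    echo "service is down (curl exited non-zero, e.g. 28 = connection timed out)"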
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3437
STEP: verifying service up-down-2 is still up
Mar 25 11:02:10.844: INFO: Creating new host exec pod
Mar 25 11:02:11.718: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:13.749: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:15.774: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:17.822: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:20.025: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 11:02:20.025: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 11:02:30.414: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-host-exec-pod
Mar 25 11:02:30.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done'
Mar 25 11:02:31.455: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.37.137:80\n+ echo\n [+ wget / + echo trace pair repeated for all 150 iterations]\n"
Mar 25 11:02:31.455: INFO: stdout: "up-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\n"
Mar 25 11:02:31.456: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-exec-pod-xftr2
Mar 25 11:02:31.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-exec-pod-xftr2 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done'
Mar 25 11:02:32.174: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.37.137:80\n+ echo\n [+ wget / + echo trace pair repeated for all 150 iterations]\n"
Mar 25 11:02:32.174: INFO: stdout: "up-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3437
STEP: Deleting pod verify-service-up-exec-pod-xftr2 in namespace services-3437
STEP: creating service up-down-3 in namespace services-3437
STEP: creating service up-down-3 in namespace services-3437
STEP: creating replication controller up-down-3 in namespace services-3437
I0325 11:02:34.160343       7 runners.go:190] Created replication controller with name: up-down-3, namespace: services-3437, replica count: 3
I0325 11:02:37.212300       7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 11:02:40.213393       7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 11:02:43.214496       7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0325 11:02:46.214765       7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
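The runner above polls the up-down-3 ReplicationController until all 3 replicas report Running. Outside the framework, a rough equivalent with plain kubectl might look like the following; the manifest file name and the name=up-down-3 label are assumptions for illustration:

    # Hedged sketch: create the RC and wait for its pods to become Ready.
    kubectl --namespace=services-3437 create -f up-down-3-rc.yaml   # hypothetical manifest with replicas: 3
    kubectl --namespace=services-3437 wait pod -l name=up-down-3 \
      --for=condition=Ready --timeout=120s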
STEP: verifying service up-down-2 is still up
Mar 25 11:02:46.257: INFO: Creating new host exec pod
Mar 25 11:02:46.344: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:49.057: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:50.731: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:52.419: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:02:54.622: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 11:02:54.623: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 11:03:03.023: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-host-exec-pod
Mar 25 11:03:03.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done'
Mar 25 11:03:04.072: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.37.137:80\n+ echo\n [+ wget / + echo trace pair repeated for all 150 iterations]\n"
Mar 25 11:03:04.073: INFO: stdout: "up-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\n"
Mar 25 11:03:04.073: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-exec-pod-bzkb9
Mar 25 11:03:04.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-exec-pod-bzkb9 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.37.137:80 2>&1 || true; echo; done'
Mar 25 11:03:04.580: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.37.137:80\n+ echo\n [+ wget / + echo trace pair repeated for all 150 iterations]\n"
Mar 25 11:03:04.581: INFO: stdout: "up-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-ck4c7\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-ck4c7\nup-down-2-ck4c7\nup-down-2-qn8bh\nup-down-2-5h9j6\nup-down-2-qn8bh\nup-down-2-qn8bh\nup-down-2-ck4c7\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3437
STEP: Deleting pod verify-service-up-exec-pod-bzkb9 in namespace services-3437
STEP: verifying service up-down-3 is up
Mar 25 11:03:05.482: INFO: Creating new host exec pod
Mar 25 11:03:06.603: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:03:09.006: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:03:10.790: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:03:12.701: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:03:14.899: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 25 11:03:14.899: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 25 11:03:24.102: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.69.123:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-host-exec-pod
Mar 25 11:03:24.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.69.123:80 2>&1 || true; echo; done'
Mar 25 11:03:25.641: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.69.123:80\n+ echo\n ..." (the "+ wget -q -T 1 -O - http://10.96.69.123:80\n+ echo\n" trace pair repeats identically for all 150 iterations of the loop)
Mar 25 11:03:25.641: INFO: stdout: "up-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-5zg6j\n"
Mar 25 11:03:25.641: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.69.123:80 2>&1 || true; echo; done" in pod services-3437/verify-service-up-exec-pod-5pgqn
Mar 25 11:03:25.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-3437 exec verify-service-up-exec-pod-5pgqn -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.69.123:80 2>&1 || true; echo; done'
Mar 25 11:03:26.954: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.69.123:80\n+ echo\n ..." (the "+ wget -q -T 1 -O - http://10.96.69.123:80\n+ echo\n" trace pair repeats identically for all 150 iterations of the loop)
Mar 25 11:03:26.954: INFO: stdout: "up-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-5zg6j\nup-down-3-95k8k\nup-down-3-mz7x4\nup-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-mz7x4\nup-down-3-95k8k\nup-down-3-95k8k\nup-down-3-95k8k\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3437
STEP: Deleting pod verify-service-up-exec-pod-5pgqn in namespace services-3437
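The two dumps above are how each "verifying service up-down-N is up" step decides reachability: a 150-iteration wget loop against the Service ClusterIP is run once from a host-network exec pod and once from a regular exec pod, and the step passes when every expected backend pod name shows up in the collected stdout. A minimal sketch of that tallying logic in Go follows; it is an illustrative helper written for this log, not the e2e framework's own code, and the sample stdout is truncated.

package main

import (
	"fmt"
	"strings"
)

// countBackends tallies how many responses each backend pod returned in a
// newline-separated wget stdout dump like the ones logged above.
func countBackends(stdout string) map[string]int {
	counts := make(map[string]int)
	for _, line := range strings.Split(strings.TrimSpace(stdout), "\n") {
		if line != "" {
			counts[line]++
		}
	}
	return counts
}

// allBackendsSeen reports whether every expected pod answered at least once.
func allBackendsSeen(counts map[string]int, expected []string) bool {
	for _, name := range expected {
		if counts[name] == 0 {
			return false
		}
	}
	return true
}

func main() {
	stdout := "up-down-3-5zg6j\nup-down-3-mz7x4\nup-down-3-95k8k\n" // truncated sample of the dump above
	counts := countBackends(stdout)
	fmt.Println(counts, allBackendsSeen(counts, []string{"up-down-3-5zg6j", "up-down-3-mz7x4", "up-down-3-95k8k"}))
}

In this run all three up-down-3 pods (5zg6j, mz7x4, 95k8k) appear in both the host-pod and the exec-pod dumps, which is why the service is considered to have 3 reachable backends.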
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:03:30.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3437" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:236.833 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":54,"completed":34,"skipped":5143,"failed":4,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:03:31.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8812.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8812.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8812.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8812.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8812.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8812.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
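Both probe containers run the same shell loop: for every name they issue a UDP and a TCP dig with +search and write an OK marker file under /results only when the query returns a non-empty answer; the test later reads those marker files back to decide which lookups succeeded. The partial names (kubernetes.default, kubernetes.default.svc) resolve only because the pod's resolv.conf search path appends the cluster suffix. A rough Go equivalent of a single probe pass is sketched below; it assumes it runs inside a cluster pod so the search domains apply, it does not force TCP the way the dig +tcp checks do, and the marker paths are illustrative.

package main

import (
	"fmt"
	"net"
	"os"
)

// probe resolves a (possibly partial) name and drops an OK marker file when
// an answer comes back, mirroring the dig-and-write-marker loop above.
// Resolution of partial names relies on the search domains in the pod's
// /etc/resolv.conf, which the Go resolver honours.
func probe(name, marker string) {
	addrs, err := net.LookupHost(name)
	if err != nil || len(addrs) == 0 {
		return // no answer yet; the real prober simply retries a second later
	}
	_ = os.WriteFile(marker, []byte("OK"), 0o644)
}

func main() {
	probe("kubernetes.default", "/results/udp@kubernetes.default")
	probe("kubernetes.default.svc", "/results/udp@kubernetes.default.svc")
	fmt.Println("probe pass complete")
}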

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 25 11:04:11.617: INFO: DNS probes using dns-8812/dns-test-226241b2-8c16-4a4a-aa6c-9bc8606a8bda succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:04:12.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8812" for this suite.

• [SLOW TEST:42.466 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":54,"completed":35,"skipped":5233,"failed":4,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:04:14.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-6719
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 11:04:17.837: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 11:04:21.344: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:04:23.627: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:04:25.943: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:04:28.451: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:04:29.707: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:04:31.499: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:04:34.217: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:04:35.995: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:04:38.056: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:04:40.043: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:04:42.005: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:04:43.618: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:04:45.551: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 11:04:46.339: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 25 11:04:48.961: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 11:05:05.613: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 11:05:05.613: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 11:05:12.706: INFO: Service node-port-service in namespace nettest-6719 found.
Mar 25 11:05:19.495: INFO: Service session-affinity-service in namespace nettest-6719 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 11:05:21.213: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 11:05:22.720: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
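Before any dialing starts, the test waits for both Services to expose two endpoints, i.e. for each Endpoints object to list one ready address per netserver pod. A hedged client-go sketch of that kind of wait is below; the namespace, service name and timeout are taken from this run for illustration only, and this is not the framework's own polling helper.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForEndpointCount polls the Endpoints object until it exposes want
// ready addresses or the timeout expires.
func waitForEndpointCount(cs kubernetes.Interface, ns, svc string, want int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
		if err == nil {
			got := 0
			for _, subset := range ep.Subsets {
				got += len(subset.Addresses)
			}
			if got == want {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("service %s/%s never reached %d endpoints", ns, svc, want)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForEndpointCount(cs, "nettest-6719", "node-port-service", 2, 3*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("both endpoints are ready")
}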
STEP: dialing(udp) test-container-pod --> 10.96.79.96:90 (config.clusterIP)
Mar 25 11:05:23.276: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:9080/dial?request=hostname&protocol=udp&host=10.96.79.96&port=90&tries=1'] Namespace:nettest-6719 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:05:23.276: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:05:24.402: INFO: Waiting for responses: map[netserver-0:{}]
Mar 25 11:05:27.590: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:9080/dial?request=hostname&protocol=udp&host=10.96.79.96&port=90&tries=1'] Namespace:nettest-6719 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:05:27.590: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:05:29.920: INFO: Waiting for responses: map[netserver-0:{}]
Mar 25 11:05:32.548: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:9080/dial?request=hostname&protocol=udp&host=10.96.79.96&port=90&tries=1'] Namespace:nettest-6719 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:05:32.548: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:05:34.287: INFO: Waiting for responses: map[netserver-0:{}]
Mar 25 11:05:37.764: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:9080/dial?request=hostname&protocol=udp&host=10.96.79.96&port=90&tries=1'] Namespace:nettest-6719 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:05:37.764: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:05:41.490: INFO: Waiting for responses: map[]
Mar 25 11:05:41.490: INFO: reached 10.96.79.96 after 3/34 tries
STEP: dialing(udp) test-container-pod --> 172.18.0.17:32680 (nodeIP)
Mar 25 11:05:42.296: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:9080/dial?request=hostname&protocol=udp&host=172.18.0.17&port=32680&tries=1'] Namespace:nettest-6719 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:05:42.297: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:05:44.580: INFO: Waiting for responses: map[netserver-0:{}]
Mar 25 11:05:47.321: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:9080/dial?request=hostname&protocol=udp&host=172.18.0.17&port=32680&tries=1'] Namespace:nettest-6719 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:05:47.321: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:05:48.871: INFO: Waiting for responses: map[]
Mar 25 11:05:48.871: INFO: reached 172.18.0.17 after 1/34 tries
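Each "dialing(udp)" step asks the webserver container inside test-container-pod to hit its /dial endpoint, which fans the request out over UDP to the target host:port and reports which backends answered; the "Waiting for responses: map[...]" lines list the responders still missing, and the step retries (up to the MaxTries of 34 computed earlier) until that map is empty. A sketch of issuing one such probe and decoding the reply is below; it assumes the reply is a small JSON document of the shape {"responses": [...]}, and the pod IP, target IP and port are simply the values seen in this run.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// dialResult is the assumed shape of the JSON returned by the /dial endpoint.
type dialResult struct {
	Responses []string `json:"responses"`
}

// dialOnce asks the test container to probe host:port over UDP once and
// returns the hostnames that answered.
func dialOnce(testPodIP, host string, port int) ([]string, error) {
	url := fmt.Sprintf("http://%s:9080/dial?request=hostname&protocol=udp&host=%s&port=%d&tries=1", testPodIP, host, port)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	var result dialResult
	if err := json.Unmarshal(body, &result); err != nil {
		return nil, err
	}
	return result.Responses, nil
}

func main() {
	answered, err := dialOnce("10.244.2.246", "10.96.79.96", 90)
	if err != nil {
		panic(err)
	}
	fmt.Println("responders:", answered)
}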
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:05:48.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6719" for this suite.

• [SLOW TEST:95.323 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":54,"completed":36,"skipped":5388,"failed":4,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
[sig-network] Conntrack 
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:05:49.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Mar 25 11:05:50.428: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:05:53.159: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:05:54.906: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:05:56.973: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:05:58.697: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:06:00.982: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node latest-worker2
STEP: Server service created
Mar 25 11:06:03.845: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:06:06.483: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:06:07.962: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:06:09.948: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
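The boom-server log that follows shows the mechanics of this check. After each handshake ("connection established") the server crafts a raw TCP segment carrying the payload "boom!!!" (the byte slice [98 111 111 109 33 33 33] in the checksumTCP lines) with sequence numbers that do not match what the client's connection tracker expects, so conntrack on the client node should classify it as INVALID; the test passes only if the client never answers with a RST, which is what the STEP above asserts. The logged Flags field packs the TCP data-offset nibble together with the flag bits, which is why SYN is logged as 40962 (0xA002), ACK as 32784 (0x8010) and FIN ACK as 32785 (0x8011). The "checksumer" lines are a ones'-complement checksum being accumulated and folded: for the first connection, sum 527734 plus odd byte 33 gives 527767, and folding as (527767 & 0xffff) + (527767 >> 16) = 3479 + 8 yields the logged 3487. A small Go sketch of that fold-and-sum arithmetic follows; it mirrors the numbers in the log but is not the boom-server's actual implementation.

package main

import "fmt"

// fold reduces a 32-bit ones'-complement accumulator to 16 bits, the same
// arithmetic visible in the checksumer lines below (527767 -> 3487).
func fold(sum uint32) uint32 {
	for sum > 0xffff {
		sum = (sum & 0xffff) + (sum >> 16)
	}
	return sum
}

// checksum accumulates 16-bit words plus a trailing odd byte, then folds,
// in the style of an RFC 1071 ones'-complement checksum.
func checksum(data []byte) uint16 {
	var sum uint32
	for i := 0; i+1 < len(data); i += 2 {
		sum += uint32(data[i])<<8 | uint32(data[i+1])
	}
	if len(data)%2 == 1 {
		sum += uint32(data[len(data)-1]) // the oddByte reported in the log
	}
	return ^uint16(fold(sum))
}

func main() {
	// Reproduce the fold from the first logged connection:
	// sum 527734 + oddByte 33 = 527767, which folds to 3479 + 8 = 3487.
	fmt.Println(fold(527734 + 33)) // prints 3487
	_ = checksum([]byte("boom!!!"))
}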
Mar 25 11:07:10.399: INFO: boom-server pod logs: 2021/03/25 11:05:57 external ip: 10.244.1.200
2021/03/25 11:05:57 listen on 0.0.0.0:9000
2021/03/25 11:05:57 probing 10.244.1.200
2021/03/25 11:06:11 tcp packet: &{SrcPort:39841 DestPort:9000 Seq:1350123983 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:11 tcp packet: &{SrcPort:39841 DestPort:9000 Seq:1350123984 Ack:707144338 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:11 connection established
2021/03/25 11:06:11 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 155 161 42 36 163 242 80 121 65 208 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:11 checksumer: &{sum:527734 oddByte:33 length:39}
2021/03/25 11:06:11 ret:  527767
2021/03/25 11:06:11 ret:  3487
2021/03/25 11:06:11 ret:  3487
2021/03/25 11:06:11 boom packet injected
2021/03/25 11:06:11 tcp packet: &{SrcPort:39841 DestPort:9000 Seq:1350123984 Ack:707144338 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:13 tcp packet: &{SrcPort:34709 DestPort:9000 Seq:879129942 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:13 tcp packet: &{SrcPort:34709 DestPort:9000 Seq:879129943 Ack:818237500 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:13 connection established
2021/03/25 11:06:13 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 135 149 48 195 201 156 52 102 117 87 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:13 checksumer: &{sum:507558 oddByte:33 length:39}
2021/03/25 11:06:13 ret:  507591
2021/03/25 11:06:13 ret:  48846
2021/03/25 11:06:13 ret:  48846
2021/03/25 11:06:13 boom packet injected
2021/03/25 11:06:13 tcp packet: &{SrcPort:34709 DestPort:9000 Seq:879129943 Ack:818237500 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:15 tcp packet: &{SrcPort:34871 DestPort:9000 Seq:2505797550 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:15 tcp packet: &{SrcPort:34871 DestPort:9000 Seq:2505797551 Ack:3471888130 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:15 connection established
2021/03/25 11:06:15 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 136 55 206 239 72 98 149 91 111 175 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:15 checksumer: &{sum:499743 oddByte:33 length:39}
2021/03/25 11:06:15 ret:  499776
2021/03/25 11:06:15 ret:  41031
2021/03/25 11:06:15 ret:  41031
2021/03/25 11:06:15 boom packet injected
2021/03/25 11:06:15 tcp packet: &{SrcPort:34871 DestPort:9000 Seq:2505797551 Ack:3471888130 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:17 tcp packet: &{SrcPort:46651 DestPort:9000 Seq:614375539 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:17 tcp packet: &{SrcPort:46651 DestPort:9000 Seq:614375540 Ack:2366249959 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:17 connection established
2021/03/25 11:06:17 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 182 59 141 8 149 71 36 158 160 116 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:17 checksumer: &{sum:436761 oddByte:33 length:39}
2021/03/25 11:06:17 ret:  436794
2021/03/25 11:06:17 ret:  43584
2021/03/25 11:06:17 ret:  43584
2021/03/25 11:06:17 boom packet injected
2021/03/25 11:06:17 tcp packet: &{SrcPort:46651 DestPort:9000 Seq:614375540 Ack:2366249959 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:19 tcp packet: &{SrcPort:40837 DestPort:9000 Seq:797220978 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:19 tcp packet: &{SrcPort:40837 DestPort:9000 Seq:797220979 Ack:1287392316 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:19 connection established
2021/03/25 11:06:19 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 159 133 76 186 133 156 47 132 160 115 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:19 checksumer: &{sum:516028 oddByte:33 length:39}
2021/03/25 11:06:19 ret:  516061
2021/03/25 11:06:19 ret:  57316
2021/03/25 11:06:19 ret:  57316
2021/03/25 11:06:19 boom packet injected
2021/03/25 11:06:19 tcp packet: &{SrcPort:40837 DestPort:9000 Seq:797220979 Ack:1287392316 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:21 tcp packet: &{SrcPort:39841 DestPort:9000 Seq:1350123985 Ack:707144339 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:21 tcp packet: &{SrcPort:42273 DestPort:9000 Seq:3570117564 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:21 tcp packet: &{SrcPort:42273 DestPort:9000 Seq:3570117565 Ack:3756595774 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:21 connection established
2021/03/25 11:06:21 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 165 33 223 231 147 158 212 203 171 189 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:21 checksumer: &{sum:539923 oddByte:33 length:39}
2021/03/25 11:06:21 ret:  539956
2021/03/25 11:06:21 ret:  15676
2021/03/25 11:06:21 ret:  15676
2021/03/25 11:06:21 boom packet injected
2021/03/25 11:06:21 tcp packet: &{SrcPort:42273 DestPort:9000 Seq:3570117565 Ack:3756595774 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:23 tcp packet: &{SrcPort:34709 DestPort:9000 Seq:879129944 Ack:818237501 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:23 tcp packet: &{SrcPort:43495 DestPort:9000 Seq:915054964 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:23 tcp packet: &{SrcPort:43495 DestPort:9000 Seq:915054965 Ack:352566917 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:23 connection established
2021/03/25 11:06:23 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 169 231 21 2 55 229 54 138 161 117 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:23 checksumer: &{sum:514633 oddByte:33 length:39}
2021/03/25 11:06:23 ret:  514666
2021/03/25 11:06:23 ret:  55921
2021/03/25 11:06:23 ret:  55921
2021/03/25 11:06:23 boom packet injected
2021/03/25 11:06:23 tcp packet: &{SrcPort:43495 DestPort:9000 Seq:915054965 Ack:352566917 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:25 tcp packet: &{SrcPort:34871 DestPort:9000 Seq:2505797552 Ack:3471888131 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:25 tcp packet: &{SrcPort:45171 DestPort:9000 Seq:2903237653 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:25 tcp packet: &{SrcPort:45171 DestPort:9000 Seq:2903237654 Ack:1123565436 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:25 connection established
2021/03/25 11:06:25 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 176 115 66 246 184 220 173 11 228 22 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:25 checksumer: &{sum:488632 oddByte:33 length:39}
2021/03/25 11:06:25 ret:  488665
2021/03/25 11:06:25 ret:  29920
2021/03/25 11:06:25 ret:  29920
2021/03/25 11:06:25 boom packet injected
2021/03/25 11:06:25 tcp packet: &{SrcPort:45171 DestPort:9000 Seq:2903237654 Ack:1123565436 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:27 tcp packet: &{SrcPort:46651 DestPort:9000 Seq:614375541 Ack:2366249960 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:27 tcp packet: &{SrcPort:35011 DestPort:9000 Seq:851430986 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:27 tcp packet: &{SrcPort:35011 DestPort:9000 Seq:851430987 Ack:2648968024 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:27 connection established
2021/03/25 11:06:27 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 136 195 157 226 132 184 50 191 206 75 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:27 checksumer: &{sum:554278 oddByte:33 length:39}
2021/03/25 11:06:27 ret:  554311
2021/03/25 11:06:27 ret:  30031
2021/03/25 11:06:27 ret:  30031
2021/03/25 11:06:27 boom packet injected
2021/03/25 11:06:27 tcp packet: &{SrcPort:35011 DestPort:9000 Seq:851430987 Ack:2648968024 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:29 tcp packet: &{SrcPort:40837 DestPort:9000 Seq:797220980 Ack:1287392317 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:29 tcp packet: &{SrcPort:34429 DestPort:9000 Seq:599992691 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:29 tcp packet: &{SrcPort:34429 DestPort:9000 Seq:599992692 Ack:1011425030 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:29 connection established
2021/03/25 11:06:29 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 134 125 60 71 152 102 35 195 41 116 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:29 checksumer: &{sum:486947 oddByte:33 length:39}
2021/03/25 11:06:29 ret:  486980
2021/03/25 11:06:29 ret:  28235
2021/03/25 11:06:29 ret:  28235
2021/03/25 11:06:29 boom packet injected
2021/03/25 11:06:29 tcp packet: &{SrcPort:34429 DestPort:9000 Seq:599992692 Ack:1011425030 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:31 tcp packet: &{SrcPort:42273 DestPort:9000 Seq:3570117566 Ack:3756595775 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:31 tcp packet: &{SrcPort:35743 DestPort:9000 Seq:4110878774 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:31 tcp packet: &{SrcPort:35743 DestPort:9000 Seq:4110878775 Ack:2765773981 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:31 connection established
2021/03/25 11:06:31 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 139 159 164 216 213 253 245 7 8 55 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:31 checksumer: &{sum:508030 oddByte:33 length:39}
2021/03/25 11:06:31 ret:  508063
2021/03/25 11:06:31 ret:  49318
2021/03/25 11:06:31 ret:  49318
2021/03/25 11:06:31 boom packet injected
2021/03/25 11:06:31 tcp packet: &{SrcPort:35743 DestPort:9000 Seq:4110878775 Ack:2765773981 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:33 tcp packet: &{SrcPort:43495 DestPort:9000 Seq:915054966 Ack:352566918 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:33 tcp packet: &{SrcPort:39629 DestPort:9000 Seq:1407203089 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:33 tcp packet: &{SrcPort:39629 DestPort:9000 Seq:1407203090 Ack:2867122834 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:33 connection established
2021/03/25 11:06:33 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 154 205 170 227 75 242 83 224 55 18 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:33 checksumer: &{sum:565654 oddByte:33 length:39}
2021/03/25 11:06:33 ret:  565687
2021/03/25 11:06:33 ret:  41407
2021/03/25 11:06:33 ret:  41407
2021/03/25 11:06:33 boom packet injected
2021/03/25 11:06:33 tcp packet: &{SrcPort:39629 DestPort:9000 Seq:1407203090 Ack:2867122834 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:35 tcp packet: &{SrcPort:45171 DestPort:9000 Seq:2903237655 Ack:1123565437 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:35 tcp packet: &{SrcPort:45049 DestPort:9000 Seq:1143151128 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:35 tcp packet: &{SrcPort:45049 DestPort:9000 Seq:1143151129 Ack:2098185195 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:35 connection established
2021/03/25 11:06:35 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 175 249 125 14 61 75 68 35 26 25 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:35 checksumer: &{sum:432964 oddByte:33 length:39}
2021/03/25 11:06:35 ret:  432997
2021/03/25 11:06:35 ret:  39787
2021/03/25 11:06:35 ret:  39787
2021/03/25 11:06:35 boom packet injected
2021/03/25 11:06:35 tcp packet: &{SrcPort:45049 DestPort:9000 Seq:1143151129 Ack:2098185195 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:37 tcp packet: &{SrcPort:35011 DestPort:9000 Seq:851430988 Ack:2648968025 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:37 tcp packet: &{SrcPort:36235 DestPort:9000 Seq:2800465759 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:37 tcp packet: &{SrcPort:36235 DestPort:9000 Seq:2800465760 Ack:56358300 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:37 connection established
2021/03/25 11:06:37 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 141 139 3 90 110 252 166 235 183 96 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:37 checksumer: &{sum:539096 oddByte:33 length:39}
2021/03/25 11:06:37 ret:  539129
2021/03/25 11:06:37 ret:  14849
2021/03/25 11:06:37 ret:  14849
2021/03/25 11:06:37 boom packet injected
2021/03/25 11:06:37 tcp packet: &{SrcPort:36235 DestPort:9000 Seq:2800465760 Ack:56358300 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:39 tcp packet: &{SrcPort:34429 DestPort:9000 Seq:599992693 Ack:1011425031 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:39 tcp packet: &{SrcPort:41313 DestPort:9000 Seq:37749165 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:39 tcp packet: &{SrcPort:41313 DestPort:9000 Seq:37749166 Ack:561236500 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:39 connection established
2021/03/25 11:06:39 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 161 97 33 114 67 116 2 64 1 174 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:39 checksumer: &{sum:475525 oddByte:33 length:39}
2021/03/25 11:06:39 ret:  475558
2021/03/25 11:06:39 ret:  16813
2021/03/25 11:06:39 ret:  16813
2021/03/25 11:06:39 boom packet injected
2021/03/25 11:06:39 tcp packet: &{SrcPort:41313 DestPort:9000 Seq:37749166 Ack:561236500 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:41 tcp packet: &{SrcPort:35743 DestPort:9000 Seq:4110878776 Ack:2765773982 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:41 tcp packet: &{SrcPort:43483 DestPort:9000 Seq:1558136473 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:41 tcp packet: &{SrcPort:43483 DestPort:9000 Seq:1558136474 Ack:2687653844 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:41 connection established
2021/03/25 11:06:41 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 169 219 160 48 209 52 92 223 70 154 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:41 checksumer: &{sum:509497 oddByte:33 length:39}
2021/03/25 11:06:41 ret:  509530
2021/03/25 11:06:41 ret:  50785
2021/03/25 11:06:41 ret:  50785
2021/03/25 11:06:41 boom packet injected
2021/03/25 11:06:41 tcp packet: &{SrcPort:43483 DestPort:9000 Seq:1558136474 Ack:2687653844 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:43 tcp packet: &{SrcPort:39629 DestPort:9000 Seq:1407203091 Ack:2867122835 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:43 tcp packet: &{SrcPort:37039 DestPort:9000 Seq:172174770 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:43 tcp packet: &{SrcPort:37039 DestPort:9000 Seq:172174771 Ack:3803818208 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:43 connection established
2021/03/25 11:06:43 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 144 175 226 184 34 64 10 67 45 179 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:43 checksumer: &{sum:502344 oddByte:33 length:39}
2021/03/25 11:06:43 ret:  502377
2021/03/25 11:06:43 ret:  43632
2021/03/25 11:06:43 ret:  43632
2021/03/25 11:06:43 boom packet injected
2021/03/25 11:06:43 tcp packet: &{SrcPort:37039 DestPort:9000 Seq:172174771 Ack:3803818208 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:45 tcp packet: &{SrcPort:45049 DestPort:9000 Seq:1143151130 Ack:2098185196 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:45 tcp packet: &{SrcPort:39311 DestPort:9000 Seq:2527419811 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:45 tcp packet: &{SrcPort:39311 DestPort:9000 Seq:2527419812 Ack:327832417 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:45 connection established
2021/03/25 11:06:45 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 153 143 19 136 204 193 150 165 93 164 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:45 checksumer: &{sum:536296 oddByte:33 length:39}
2021/03/25 11:06:45 ret:  536329
2021/03/25 11:06:45 ret:  12049
2021/03/25 11:06:45 ret:  12049
2021/03/25 11:06:45 boom packet injected
2021/03/25 11:06:45 tcp packet: &{SrcPort:39311 DestPort:9000 Seq:2527419812 Ack:327832417 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:47 tcp packet: &{SrcPort:36235 DestPort:9000 Seq:2800465761 Ack:56358301 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:47 tcp packet: &{SrcPort:41411 DestPort:9000 Seq:464834279 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:47 tcp packet: &{SrcPort:41411 DestPort:9000 Seq:464834280 Ack:3765947232 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:47 connection established
2021/03/25 11:06:47 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 161 195 224 118 68 192 27 180 206 232 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:47 checksumer: &{sum:566059 oddByte:33 length:39}
2021/03/25 11:06:47 ret:  566092
2021/03/25 11:06:47 ret:  41812
2021/03/25 11:06:47 ret:  41812
2021/03/25 11:06:47 boom packet injected
2021/03/25 11:06:47 tcp packet: &{SrcPort:41411 DestPort:9000 Seq:464834280 Ack:3765947232 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:49 tcp packet: &{SrcPort:41313 DestPort:9000 Seq:37749167 Ack:561236501 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:49 tcp packet: &{SrcPort:39335 DestPort:9000 Seq:3004695759 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:49 tcp packet: &{SrcPort:39335 DestPort:9000 Seq:3004695760 Ack:353937776 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:49 connection established
2021/03/25 11:06:49 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 153 167 21 23 34 208 179 24 4 208 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:49 checksumer: &{sum:492292 oddByte:33 length:39}
2021/03/25 11:06:49 ret:  492325
2021/03/25 11:06:49 ret:  33580
2021/03/25 11:06:49 ret:  33580
2021/03/25 11:06:49 boom packet injected
2021/03/25 11:06:49 tcp packet: &{SrcPort:39335 DestPort:9000 Seq:3004695760 Ack:353937776 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:51 tcp packet: &{SrcPort:43483 DestPort:9000 Seq:1558136475 Ack:2687653845 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:51 tcp packet: &{SrcPort:37143 DestPort:9000 Seq:2571551811 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:51 tcp packet: &{SrcPort:37143 DestPort:9000 Seq:2571551812 Ack:2635698720 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:51 connection established
2021/03/25 11:06:51 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 145 23 157 24 11 128 153 70 196 68 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:51 checksumer: &{sum:411411 oddByte:33 length:39}
2021/03/25 11:06:51 ret:  411444
2021/03/25 11:06:51 ret:  18234
2021/03/25 11:06:51 ret:  18234
2021/03/25 11:06:51 boom packet injected
2021/03/25 11:06:51 tcp packet: &{SrcPort:37143 DestPort:9000 Seq:2571551812 Ack:2635698720 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:53 tcp packet: &{SrcPort:37039 DestPort:9000 Seq:172174772 Ack:3803818209 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:53 tcp packet: &{SrcPort:46523 DestPort:9000 Seq:723252352 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:53 tcp packet: &{SrcPort:46523 DestPort:9000 Seq:723252353 Ack:42597021 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:53 connection established
2021/03/25 11:06:53 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 181 187 2 136 115 253 43 27 244 129 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:53 checksumer: &{sum:518598 oddByte:33 length:39}
2021/03/25 11:06:53 ret:  518631
2021/03/25 11:06:53 ret:  59886
2021/03/25 11:06:53 ret:  59886
2021/03/25 11:06:53 boom packet injected
2021/03/25 11:06:53 tcp packet: &{SrcPort:46523 DestPort:9000 Seq:723252353 Ack:42597021 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:55 tcp packet: &{SrcPort:39311 DestPort:9000 Seq:2527419813 Ack:327832418 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:55 tcp packet: &{SrcPort:43761 DestPort:9000 Seq:2628570201 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:55 tcp packet: &{SrcPort:43761 DestPort:9000 Seq:2628570202 Ack:1341036792 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:55 connection established
2021/03/25 11:06:55 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 170 241 79 237 18 88 156 172 204 90 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:55 checksumer: &{sum:543216 oddByte:33 length:39}
2021/03/25 11:06:55 ret:  543249
2021/03/25 11:06:55 ret:  18969
2021/03/25 11:06:55 ret:  18969
2021/03/25 11:06:55 boom packet injected
2021/03/25 11:06:55 tcp packet: &{SrcPort:43761 DestPort:9000 Seq:2628570202 Ack:1341036792 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:57 tcp packet: &{SrcPort:41411 DestPort:9000 Seq:464834281 Ack:3765947233 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:57 tcp packet: &{SrcPort:41383 DestPort:9000 Seq:2048610132 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:57 tcp packet: &{SrcPort:41383 DestPort:9000 Seq:2048610133 Ack:1991674326 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:57 connection established
2021/03/25 11:06:57 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 161 167 118 181 3 54 122 27 79 85 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:57 checksumer: &{sum:462688 oddByte:33 length:39}
2021/03/25 11:06:57 ret:  462721
2021/03/25 11:06:57 ret:  3976
2021/03/25 11:06:57 ret:  3976
2021/03/25 11:06:57 boom packet injected
2021/03/25 11:06:57 tcp packet: &{SrcPort:41383 DestPort:9000 Seq:2048610133 Ack:1991674326 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:59 tcp packet: &{SrcPort:39335 DestPort:9000 Seq:3004695761 Ack:353937777 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:59 tcp packet: &{SrcPort:37435 DestPort:9000 Seq:2806435213 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:06:59 tcp packet: &{SrcPort:37435 DestPort:9000 Seq:2806435214 Ack:3656082632 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:06:59 connection established
2021/03/25 11:06:59 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 146 59 217 233 222 40 167 70 205 142 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:06:59 checksumer: &{sum:470842 oddByte:33 length:39}
2021/03/25 11:06:59 ret:  470875
2021/03/25 11:06:59 ret:  12130
2021/03/25 11:06:59 ret:  12130
2021/03/25 11:06:59 boom packet injected
2021/03/25 11:06:59 tcp packet: &{SrcPort:37435 DestPort:9000 Seq:2806435214 Ack:3656082632 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:01 tcp packet: &{SrcPort:37143 DestPort:9000 Seq:2571551813 Ack:2635698721 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:01 tcp packet: &{SrcPort:41135 DestPort:9000 Seq:3978324517 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:07:01 tcp packet: &{SrcPort:41135 DestPort:9000 Seq:3978324518 Ack:973459446 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:01 connection established
2021/03/25 11:07:01 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 160 175 58 4 73 86 237 32 106 38 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:07:01 checksumer: &{sum:417015 oddByte:33 length:39}
2021/03/25 11:07:01 ret:  417048
2021/03/25 11:07:01 ret:  23838
2021/03/25 11:07:01 ret:  23838
2021/03/25 11:07:01 boom packet injected
2021/03/25 11:07:01 tcp packet: &{SrcPort:41135 DestPort:9000 Seq:3978324518 Ack:973459446 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:03 tcp packet: &{SrcPort:46523 DestPort:9000 Seq:723252354 Ack:42597022 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:03 tcp packet: &{SrcPort:44247 DestPort:9000 Seq:2378301667 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:07:03 tcp packet: &{SrcPort:44247 DestPort:9000 Seq:2378301668 Ack:4170449596 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:03 connection established
2021/03/25 11:07:03 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 172 215 248 146 124 28 141 194 0 228 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:07:03 checksumer: &{sum:538922 oddByte:33 length:39}
2021/03/25 11:07:03 ret:  538955
2021/03/25 11:07:03 ret:  14675
2021/03/25 11:07:03 ret:  14675
2021/03/25 11:07:03 boom packet injected
2021/03/25 11:07:03 tcp packet: &{SrcPort:44247 DestPort:9000 Seq:2378301668 Ack:4170449596 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:05 tcp packet: &{SrcPort:43761 DestPort:9000 Seq:2628570203 Ack:1341036793 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:05 tcp packet: &{SrcPort:37689 DestPort:9000 Seq:3789783165 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:07:05 tcp packet: &{SrcPort:37689 DestPort:9000 Seq:3789783166 Ack:3012492911 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:05 connection established
2021/03/25 11:07:05 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 147 57 179 141 119 207 225 227 128 126 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:07:05 checksumer: &{sum:525467 oddByte:33 length:39}
2021/03/25 11:07:05 ret:  525500
2021/03/25 11:07:05 ret:  1220
2021/03/25 11:07:05 ret:  1220
2021/03/25 11:07:05 boom packet injected
2021/03/25 11:07:05 tcp packet: &{SrcPort:37689 DestPort:9000 Seq:3789783166 Ack:3012492911 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:07 tcp packet: &{SrcPort:41383 DestPort:9000 Seq:2048610134 Ack:1991674327 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:07 tcp packet: &{SrcPort:45899 DestPort:9000 Seq:2221917784 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:07:07 tcp packet: &{SrcPort:45899 DestPort:9000 Seq:2221917785 Ack:946795679 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:07 connection established
2021/03/25 11:07:07 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 179 75 56 109 109 255 132 111 198 89 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:07:07 checksumer: &{sum:494879 oddByte:33 length:39}
2021/03/25 11:07:07 ret:  494912
2021/03/25 11:07:07 ret:  36167
2021/03/25 11:07:07 ret:  36167
2021/03/25 11:07:07 boom packet injected
2021/03/25 11:07:07 tcp packet: &{SrcPort:45899 DestPort:9000 Seq:2221917785 Ack:946795679 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:09 tcp packet: &{SrcPort:37435 DestPort:9000 Seq:2806435215 Ack:3656082633 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:09 tcp packet: &{SrcPort:41255 DestPort:9000 Seq:1517014876 Ack:0 Flags:40962 WindowSize:64240 Checksum:6628 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.6
2021/03/25 11:07:09 tcp packet: &{SrcPort:41255 DestPort:9000 Seq:1517014877 Ack:1453797271 Flags:32784 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.6
2021/03/25 11:07:09 connection established
2021/03/25 11:07:09 calling checksumTCP: 10.244.1.200 10.244.2.6 [35 40 161 39 86 165 168 247 90 107 207 93 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/25 11:07:09 checksumer: &{sum:497989 oddByte:33 length:39}
2021/03/25 11:07:09 ret:  498022
2021/03/25 11:07:09 ret:  39277
2021/03/25 11:07:09 ret:  39277
2021/03/25 11:07:09 boom packet injected
2021/03/25 11:07:09 tcp packet: &{SrcPort:41255 DestPort:9000 Seq:1517014877 Ack:1453797271 Flags:32785 WindowSize:502 Checksum:6620 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.6

Mar 25 11:07:10.399: INFO: boom-server OK: did not receive any RST packet
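
The checksumer / ret lines above are consistent with a standard ones'-complement (RFC 1071 style) sum: 16-bit words are accumulated into a 32-bit counter, the odd trailing byte (33, the final '!' of the "boom!!!" payload) is added on, and the carries are folded back until the value fits in 16 bits (for the 11:06:31 entry, 508030 + 33 = 508063, which folds to 49318). A minimal Go sketch that reproduces those numbers is below; checksumFold is an illustrative name, not the helper the boom-server actually uses.

package main

import "fmt"

// checksumFold sketches the fold step suggested by the "checksumer"/"ret"
// lines above: add the odd trailing byte to the running 32-bit sum, then
// fold the carry bits back into the low 16 bits until none remain.
// (A real TCP checksum would finish with ^sum in the header field.)
func checksumFold(sum, oddByte uint32) uint16 {
	sum += oddByte // e.g. 508030 + 33 -> 508063 ("ret:  508063")
	for sum > 0xffff {
		sum = (sum >> 16) + (sum & 0xffff) // 508063 -> 49318 ("ret:  49318")
	}
	return uint16(sum)
}

func main() {
	// Values taken from the 11:06:31 log entry: sum=508030, oddByte=33.
	fmt.Println(checksumFold(508030, 33)) // prints 49318, matching the log
}
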
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:07:10.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-7558" for this suite.

• [SLOW TEST:81.199 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":54,"completed":37,"skipped":5388,"failed":4,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking 
  should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:07:10.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-6310
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 11:07:11.105: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 11:07:11.655: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:07:14.538: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:07:16.097: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:07:17.837: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:07:19.745: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:07:21.737: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:07:24.015: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:07:25.859: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:07:27.661: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:07:29.920: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:07:31.817: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:07:33.745: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 11:07:34.619: INFO: The status of Pod netserver-1 is Running (Ready = true)
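
The repeated "The status of Pod netserver-0 is ..." lines are the framework polling the pod until its Ready condition turns true. Roughly, that amounts to a client-go loop like the sketch below, written for illustration with the namespace, pod name and kubeconfig path taken from this log; it is not the e2e framework's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Poll a pod until its Ready condition is true, logging the phase while
// waiting, much like the lines above. Sketch only.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("nettest-6310").Get(context.TODO(), "netserver-0", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		fmt.Printf("The status of Pod %s is %s, waiting for Ready\n", pod.Name, pod.Status.Phase)
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}
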
STEP: Creating test pods
Mar 25 11:07:43.416: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 11:07:43.416: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 11:07:45.314: INFO: Service node-port-service in namespace nettest-6310 found.
Mar 25 11:07:46.863: INFO: Service session-affinity-service in namespace nettest-6310 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 11:07:47.997: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 11:07:49.858: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: checking kube-proxy URLs
STEP: Getting kube-proxy self URL /healthz
Mar 25 11:07:50.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=nettest-6310 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz'
Mar 25 11:07:51.747: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n"
Mar 25 11:07:51.747: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Thu, 25 Mar 2021 11:07:51 GMT\r\nContent-Length: 155\r\n\r\n{\"lastUpdated\": \"2021-03-25 11:07:51.739902486 +0000 UTC m=+270044.310835887\",\"currentTime\": \"2021-03-25 11:07:51.739902486 +0000 UTC m=+270044.310835887\"}"
STEP: Getting kube-proxy self URL /healthz
Mar 25 11:07:51.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=nettest-6310 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz'
Mar 25 11:07:53.152: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n"
Mar 25 11:07:53.152: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Thu, 25 Mar 2021 11:07:53 GMT\r\nContent-Length: 153\r\n\r\n{\"lastUpdated\": \"2021-03-25 11:07:53.13711698 +0000 UTC m=+270045.708050377\",\"currentTime\": \"2021-03-25 11:07:53.13711698 +0000 UTC m=+270045.708050377\"}"
STEP: Checking status code against http://localhost:10249/proxyMode
Mar 25 11:07:53.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=nettest-6310 exec host-test-container-pod -- /bin/sh -x -c curl -o /dev/null -i -q -s -w %{http_code} --connect-timeout 1 http://localhost:10249/proxyMode'
Mar 25 11:07:54.965: INFO: stderr: "+ curl -o /dev/null -i -q -s -w '%{http_code}' --connect-timeout 1 http://localhost:10249/proxyMode\n"
Mar 25 11:07:54.965: INFO: stdout: "200"
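
Stripped of the kubectl exec plumbing, the check above boils down to two HTTP GETs against kube-proxy's local ports: /healthz on :10256 and /proxyMode on :10249, both expected to return 200 when issued from the node (here via a host-network pod). A hedged sketch of that, not the test's own code:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// check fetches one kube-proxy URL with a short timeout and expects HTTP 200,
// roughly what the `curl -i -q -s --connect-timeout 1 ...` calls above do.
func check(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s: got %d", url, resp.StatusCode)
	}
	fmt.Printf("%s -> %d %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	for _, u := range []string{
		"http://localhost:10256/healthz",   // kube-proxy health endpoint
		"http://localhost:10249/proxyMode", // kube-proxy metrics/proxyMode endpoint
	} {
		if err := check(u); err != nil {
			fmt.Println("FAIL:", err)
		}
	}
}
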
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:07:54.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6310" for this suite.

• [SLOW TEST:45.320 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
------------------------------
{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":54,"completed":38,"skipped":5451,"failed":4,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:07:55.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node latest-worker
Mar 25 11:07:58.942: INFO: Creating new exec pod
Mar 25 11:08:08.149: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node latest-worker
Mar 25 11:08:08.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-527 exec execpod-noendpoints4xcsj -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Mar 25 11:08:09.837: INFO: rc: 1
Mar 25 11:08:09.837: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=services-527 exec execpod-noendpoints4xcsj -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
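
The expectation behind "error contained 'REFUSED', as expected" is that a Service with no ready endpoints is actively rejected (kube-proxy programs a REJECT rule for it) rather than left to hang until the timeout. agnhost's connect subcommand does roughly what the sketch below does; the service name no-pods:80 is the one from the log, and the snippet would have to run inside the cluster to resolve it.

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

// Dial the endpoint-less service and classify the outcome: an immediate
// ECONNREFUSED is the desired result, a timeout would indicate a problem.
func main() {
	_, err := net.DialTimeout("tcp", "no-pods:80", 3*time.Second)
	switch {
	case err == nil:
		fmt.Println("UNEXPECTED: connection succeeded")
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("REFUSED") // the expected outcome, matching the log
	default:
		fmt.Println("OTHER:", err)
	}
}
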
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:08:09.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-527" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• [SLOW TEST:14.202 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":54,"completed":39,"skipped":5471,"failed":4,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:08:10.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-7756
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 25 11:08:11.738: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 25 11:08:13.262: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:08:15.566: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:08:17.615: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:08:19.688: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:21.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:23.939: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:25.299: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:27.448: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:29.671: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:31.540: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:33.606: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:35.938: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 25 11:08:37.515: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 25 11:08:38.315: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 25 11:08:49.049: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 25 11:08:49.049: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 25 11:08:50.834: INFO: Service node-port-service in namespace nettest-7756 found.
Mar 25 11:08:51.535: INFO: Service session-affinity-service in namespace nettest-7756 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 25 11:08:52.541: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 25 11:08:53.605: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.96.247.164:80 (config.clusterIP)
Mar 25 11:08:53.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.212:9080/dial?request=echo?msg=42424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242&protocol=http&host=10.96.247.164&port=80&tries=1'] Namespace:nettest-7756 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:08:53.908: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:08:54.076: INFO: Waiting for responses: map[]
Mar 25 11:08:54.076: INFO: reached 10.96.247.164 after 0/34 tries
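
The very long ExecWithOptions line above is the framework asking the test pod's webserver (listening on :9080) to dial the service ClusterIP with a large echo message and report what came back. A rough reconstruction of that probe follows; the addresses are copied from the log, the payload length is only a stand-in, and query-escaping is added here for safety even though the logged URL embeds the echo request unescaped.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

// Ask the in-cluster test pod to fetch /echo?msg=<large payload> from the
// service at 10.96.247.164:80 and return the responses it saw. The JSON
// decoding of the {"responses": [...]} body is omitted in this sketch.
func main() {
	payload := strings.Repeat("42", 1024) // stand-in for the long msg in the log

	dial := fmt.Sprintf(
		"http://10.244.1.212:9080/dial?request=%s&protocol=http&host=10.96.247.164&port=80&tries=1",
		url.QueryEscape("echo?msg="+payload),
	)
	resp, err := http.Get(dial)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expected to contain the echoed payload
}
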
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:08:54.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7756" for this suite.

• [SLOW TEST:44.274 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":54,"completed":40,"skipped":5620,"failed":4,"failures":["[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:08:54.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Mar 25 11:08:56.114: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/: 
alternatives.log
containers/

(the same two-entry listing, alternatives.log and containers/, repeats verbatim for the remaining proxy-log requests in this test; the duplicate copies are omitted here)