I0321 23:20:43.934275 7 e2e.go:129] Starting e2e run "b27dc6d4-0263-4fdf-944c-cf0c2b3dd3c5" on Ginkgo node 1
{"msg":"Test Suite starting","total":54,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616368842 - Will randomize all specs
Will run 54 of 5737 specs
Mar 21 23:20:44.056: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:20:44.058: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 21 23:20:44.168: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 21 23:20:44.243: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 21 23:20:44.243: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 21 23:20:44.243: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 21 23:20:44.289: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 21 23:20:44.289: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 21 23:20:44.289: INFO: e2e test version: v1.21.0-beta.1
Mar 21 23:20:44.290: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 21 23:20:44.290: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:20:44.317: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:20:44.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
Mar 21 23:20:44.516: INFO: No
PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for service endpoints using hostNetwork /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492 STEP: Performing setup for networking test in namespace nettest-4860 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 23:20:44.611: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 21 23:20:44.710: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:20:47.675: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:20:49.057: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:20:51.507: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:20:52.963: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:20:54.719: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:20:56.715: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:20:58.752: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:21:00.814: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:21:02.729: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:21:04.929: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:21:07.071: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 21 23:21:07.125: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: 
Creating test pods Mar 21 23:21:17.303: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 21 23:21:17.303: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 21 23:21:17.423: INFO: Service node-port-service in namespace nettest-4860 found. Mar 21 23:21:17.789: INFO: Service session-affinity-service in namespace nettest-4860 found. STEP: Waiting for NodePort service to expose endpoint Mar 21 23:21:18.814: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 21 23:21:20.324: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: pod-Service(hostNetwork): http STEP: dialing(http) test-container-pod --> 10.96.121.146:80 (config.clusterIP) Mar 21 23:21:20.739: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:9080/dial?request=hostname&protocol=http&host=10.96.121.146&port=80&tries=1'] Namespace:nettest-4860 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:20.739: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:21.090: INFO: Waiting for responses: map[latest-worker:{}] Mar 21 23:21:23.329: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:9080/dial?request=hostname&protocol=http&host=10.96.121.146&port=80&tries=1'] Namespace:nettest-4860 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:23.329: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:24.522: INFO: Waiting for responses: map[] Mar 21 23:21:24.522: INFO: reached 10.96.121.146 after 1/34 tries STEP: dialing(http) test-container-pod --> 172.18.0.9:30212 (nodeIP) Mar 21 23:21:24.627: INFO: 
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30212&tries=1'] Namespace:nettest-4860 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:24.627: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:24.970: INFO: Waiting for responses: map[latest-worker:{}] Mar 21 23:21:27.053: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30212&tries=1'] Namespace:nettest-4860 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:27.053: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:27.278: INFO: Waiting for responses: map[] Mar 21 23:21:27.279: INFO: reached 172.18.0.9 after 1/34 tries STEP: node-Service(hostNetwork): http STEP: dialing(http) 172.18.0.9 (node) --> 10.96.121.146:80 (config.clusterIP) Mar 21 23:21:27.279: INFO: Going to poll 10.96.121.146 on port 80 at least 0 times, with a maximum of 34 tries before failing Mar 21 23:21:27.281: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.121.146:80/hostName | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:27.281: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:27.427: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 21 23:21:29.659: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.96.121.146:80/hostName | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:29.659: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:30.608: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: dialing(http) 172.18.0.9 (node) --> 172.18.0.9:30212 (nodeIP) Mar 21 23:21:30.608: INFO: Going to poll 172.18.0.9 on port 30212 at least 0 times, with a maximum of 34 tries before failing Mar 21 23:21:31.114: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:30212/hostName | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:31.114: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:31.724: INFO: Waiting for [latest-worker] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker2]) Mar 21 23:21:34.161: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:30212/hostName | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:34.161: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:34.965: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: node-Service(hostNetwork): udp STEP: dialing(udp) 172.18.0.9 (node) --> 10.96.121.146:90 (config.clusterIP) Mar 21 23:21:34.965: INFO: Going to poll 10.96.121.146 on port 90 at least 0 times, with a maximum of 34 tries before failing Mar 21 23:21:35.013: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.121.146 90 | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:35.013: INFO: 
>>> kubeConfig: /root/.kube/config Mar 21 23:21:36.097: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 21 23:21:38.120: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.121.146 90 | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:38.120: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:40.337: INFO: Waiting for [latest-worker2] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker]) Mar 21 23:21:42.486: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.121.146 90 | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:42.486: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:43.823: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: dialing(udp) 172.18.0.9 (node) --> 172.18.0.9:31397 (nodeIP) Mar 21 23:21:43.823: INFO: Going to poll 172.18.0.9 on port 31397 at least 0 times, with a maximum of 34 tries before failing Mar 21 23:21:44.077: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 31397 | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:44.077: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:45.759: INFO: Waiting for [latest-worker] endpoints (expected=[latest-worker latest-worker2], actual=[latest-worker2]) Mar 21 23:21:48.325: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 31397 | grep -v '^\s*$'] Namespace:nettest-4860 PodName:host-test-container-pod 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:48.325: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:50.864: INFO: Found all 2 expected endpoints: [latest-worker latest-worker2] STEP: handle large requests: http(hostNetwork) STEP: dialing(http) test-container-pod --> 10.96.121.146:80 (config.clusterIP) Mar 21 23:21:51.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.63:9080/dial?request=echo?msg=424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242
42424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242&protocol=http&host=10.96.121.146&port=80&tries=1'] Namespace:nettest-4860 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:51.338: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:53.930: INFO: Waiting for responses: map[] Mar 21 23:21:53.930: INFO: reached 10.96.121.146 after 0/34 tries STEP: handle large requests: udp(hostNetwork) STEP: dialing(udp) test-container-pod --> 10.96.121.146:90 (config.clusterIP) Mar 21 23:21:54.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.63:9080/dial?request=echo%20noooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.96.121.146&port=90&tries=1'] Namespace:nettest-4860 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:21:54.825: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:21:56.393: INFO: Waiting for responses: map[] Mar 21 23:21:56.393: INFO: reached 10.96.121.146 after 0/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:21:56.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-4860" for this suite. • [SLOW TEST:72.859 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for service endpoints using hostNetwork /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:492 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork","total":54,"completed":1,"skipped":15,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130 [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:21:57.178: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename conntrack STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Conntrack /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96 [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130 STEP: creating a UDP service svc-udp with type=NodePort in conntrack-3032 STEP: creating a client pod for probing the service svc-udp Mar 21 23:22:06.002: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:08.304: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:10.553: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:12.750: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:14.544: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:16.652: INFO: The status of Pod pod-client is Running (Ready = true) Mar 21 23:22:17.754: INFO: Pod client logs: Sun Mar 21 23:22:14 UTC 2021 Sun Mar 21 23:22:14 UTC 2021 Try: 1 Sun Mar 21 23:22:14 UTC 2021 Try: 2 Sun Mar 21 23:22:14 UTC 2021 Try: 3 Sun Mar 21 23:22:14 UTC 2021 Try: 4 Sun Mar 21 23:22:14 UTC 2021 Try: 5 Sun Mar 21 23:22:14 UTC 2021 Try: 6 Sun Mar 21 23:22:14 UTC 2021 Try: 7 STEP: creating a backend pod pod-server-1 for the service svc-udp Mar 21 23:22:18.407: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:20.952: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:22.539: INFO: The status of Pod pod-server-1 is Pending, 
waiting for it to be Running (with Ready = true) Mar 21 23:22:24.517: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:26.581: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:28.977: INFO: The status of Pod pod-server-1 is Running (Ready = true) STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-3032 to expose endpoints map[pod-server-1:[80]] Mar 21 23:22:29.855: INFO: successfully validated that service svc-udp in namespace conntrack-3032 exposes endpoints map[pod-server-1:[80]] STEP: checking client pod connected to the backend 1 on Node IP 172.18.0.13 STEP: creating a second backend pod pod-server-2 for the service svc-udp Mar 21 23:22:39.141: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:41.594: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:43.337: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:45.173: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:22:47.248: INFO: The status of Pod pod-server-2 is Running (Ready = true) Mar 21 23:22:47.382: INFO: Cleaning up pod-server-1 pod Mar 21 23:22:47.674: INFO: Waiting for pod pod-server-1 to disappear Mar 21 23:22:47.975: INFO: Pod pod-server-1 no longer exists STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-3032 to expose endpoints map[pod-server-2:[80]] Mar 21 23:22:48.643: INFO: successfully validated that service svc-udp in namespace conntrack-3032 exposes endpoints map[pod-server-2:[80]] STEP: checking client pod connected to the backend 2 on Node IP 172.18.0.13 [AfterEach] [sig-network] Conntrack 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:22:59.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-3032" for this suite.
• [SLOW TEST:63.298 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to preserve UDP traffic when server pod cycles for a NodePort service
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":54,"completed":2,"skipped":114,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services should function for node-Service: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:23:00.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-8060
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 23:23:02.308: INFO:
Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 21 23:23:03.291: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:23:05.314: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:23:07.810: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:23:09.626: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:11.419: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:13.450: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:15.295: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:17.627: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:19.308: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:21.296: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:23.294: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:25.354: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:23:27.323: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 21 23:23:27.336: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 21 23:23:35.671: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 21 23:23:35.671: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 21 23:23:35.988: INFO: Service node-port-service in namespace nettest-8060 found. Mar 21 23:23:36.632: INFO: Service session-affinity-service in namespace nettest-8060 found. 
STEP: Waiting for NodePort service to expose endpoint Mar 21 23:23:37.636: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 21 23:23:38.639: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(udp) 172.18.0.9 (node) --> 10.96.89.34:90 (config.clusterIP) Mar 21 23:23:38.989: INFO: Going to poll 10.96.89.34 on port 90 at least 0 times, with a maximum of 34 tries before failing Mar 21 23:23:38.992: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.89.34 90 | grep -v '^\s*$'] Namespace:nettest-8060 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:23:38.992: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:23:40.143: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 21 23:23:42.156: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.89.34 90 | grep -v '^\s*$'] Namespace:nettest-8060 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:23:42.156: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:23:43.318: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 21 23:23:45.329: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.96.89.34 90 | grep -v '^\s*$'] Namespace:nettest-8060 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:23:45.329: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:23:46.520: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] STEP: dialing(udp) 172.18.0.9 (node) --> 172.18.0.9:31638 (nodeIP) Mar 21 23:23:46.520: 
INFO: Going to poll 172.18.0.9 on port 31638 at least 0 times, with a maximum of 34 tries before failing Mar 21 23:23:46.539: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 31638 | grep -v '^\s*$'] Namespace:nettest-8060 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:23:46.539: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:23:47.698: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0]) Mar 21 23:23:49.707: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 31638 | grep -v '^\s*$'] Namespace:nettest-8060 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:23:49.707: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:23:51.014: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:23:51.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-8060" for this suite. 
• [SLOW TEST:51.141 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for node-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: udp","total":54,"completed":3,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:323 [BeforeEach] Change stubDomain /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:23:51.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns-config-map STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to change stubDomain configuration [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:323 STEP: Finding a DNS pod Mar 21 23:23:51.869: INFO: Using DNS pod: coredns-74ff55c5b-2wlxf Mar 21 23:23:51.999: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-configmap-6f24610d-f95d-40f0-b902-9cc7e80281fb dns-config-map-1418 57e5ccda-29ca-4fdf-8f35-ed73c64fde39 6920730 0 2021-03-21 23:23:51 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-21 23:23:51 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":10101,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlc4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlc4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:10101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlc4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinO
nce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 23:23:59.349: INFO: Created service &Service{ObjectMeta:{e2e-dns-configmap dns-config-map-1418 aa58b1ac-c5ad-4039-a68d-aeb060c473e4 6920868 0 2021-03-21 23:23:59 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-21 23:23:58 +0000 UTC FieldsV1 
{"f:spec":{"f:ports":{".":{},"k:{\"port\":10101,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{".":{},"f:app":{}},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:10101,TargetPort:{0 10101 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: e2e-dns-configmap,},ClusterIP:10.96.170.205,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.170.205],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} Mar 21 23:23:59.573: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-87579fa7-b7ca-40e7-829f-b669f23511ed dns-config-map-1418 87b976c5-2184-48a3-86f9-3ab04f1cb896 6920880 0 2021-03-21 23:23:59 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-21 23:23:59 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-sgts2,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-jlc4p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlc4p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[/coredns],Args:[-co
nf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-jlc4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Ena
bleServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { health ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } forward . 10.244.2.249 } acme.local:53 { forward . 
10.244.2.249 }] BinaryData:map[]}
Mar 21 23:24:08.934: INFO: ExecWithOptions {Command:[dig +short abc.acme.local] Namespace:dns-config-map-1418 PodName:e2e-dns-configmap-6f24610d-f95d-40f0-b902-9cc7e80281fb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:24:08.934: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:24:09.286: INFO: Running dig: [dig +short abc.acme.local], stdout: "", stderr: "", err: 
Mar 21 23:24:10.287: INFO: ExecWithOptions {Command:[dig +short abc.acme.local] Namespace:dns-config-map-1418 PodName:e2e-dns-configmap-6f24610d-f95d-40f0-b902-9cc7e80281fb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:24:10.287: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:24:22.125: INFO: Running dig: [dig +short abc.acme.local], stdout: "1.1.1.1", stderr: "", err: 
Mar 21 23:24:22.126: INFO: ExecWithOptions {Command:[dig +short def.acme.local] Namespace:dns-config-map-1418 PodName:e2e-dns-configmap-6f24610d-f95d-40f0-b902-9cc7e80281fb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:24:22.126: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:24:23.160: INFO: Running dig: [dig +short def.acme.local], stdout: "2.2.2.2", stderr: "", err: 
Mar 21 23:24:23.161: INFO: ExecWithOptions {Command:[dig +short widget.local] Namespace:dns-config-map-1418 PodName:e2e-dns-configmap-6f24610d-f95d-40f0-b902-9cc7e80281fb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:24:23.161: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:24:24.169: INFO: Running dig: [dig +short widget.local], stdout: "3.3.3.3", stderr: "", err: 
STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: 
UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]}
Mar 21 23:24:26.349: INFO: ExecWithOptions {Command:[dig +short abc.acme.local] Namespace:dns-config-map-1418 PodName:e2e-dns-configmap-6f24610d-f95d-40f0-b902-9cc7e80281fb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:24:26.349: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:24:37.046: INFO: Running dig: [dig +short abc.acme.local], stdout: "", stderr: "", err: 
STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]}
[AfterEach] Change stubDomain
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:24:39.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-config-map-1418" for this suite. 
• [SLOW TEST:48.932 seconds]
[sig-network] DNS configMap nameserver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Change stubDomain
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:320
should be able to change stubDomain configuration [Slow][Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:323
------------------------------
{"msg":"PASSED [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]","total":54,"completed":4,"skipped":185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:24:40.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be 
possible to connect to a service via ExternalIP when the external IP is not assigned to a node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-9607
STEP: creating replication controller externalip-test in namespace services-9607
I0321 23:24:44.555081 7 runners.go:190] Created replication controller with name: externalip-test, namespace: services-9607, replica count: 2
I0321 23:24:47.605623 7 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0321 23:24:50.605858 7 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 21 23:24:50.605: INFO: Creating new exec pod
E0321 23:24:57.158572 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0321 23:24:58.399534 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0321 23:25:00.257904 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0321 23:25:03.974951 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0321 23:25:10.832338 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0321 23:25:33.728715 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0321 23:26:13.493946 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0321 23:26:47.188371 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 21 23:26:57.157: FAIL: Unexpected error:
    <*errors.errorString | 0xc000426b10>: {
        s: "no subset of available IP address found for the endpoint externalip-test within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint externalip-test within timeout 2m0s
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.12()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201 +0x30f
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00386cd80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9607".
STEP: Found 14 events. 
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:44 +0000 UTC - event for externalip-test: {replication-controller } SuccessfulCreate: Created pod: externalip-test-ls4kf
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:44 +0000 UTC - event for externalip-test: {replication-controller } SuccessfulCreate: Created pod: externalip-test-2g8rm
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:44 +0000 UTC - event for externalip-test-2g8rm: {default-scheduler } Scheduled: Successfully assigned services-9607/externalip-test-2g8rm to latest-worker
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:44 +0000 UTC - event for externalip-test-ls4kf: {default-scheduler } Scheduled: Successfully assigned services-9607/externalip-test-ls4kf to latest-worker
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:46 +0000 UTC - event for externalip-test-2g8rm: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:46 +0000 UTC - event for externalip-test-ls4kf: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:48 +0000 UTC - event for externalip-test-2g8rm: {kubelet latest-worker} Created: Created container externalip-test
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:48 +0000 UTC - event for externalip-test-2g8rm: {kubelet latest-worker} Started: Started container externalip-test
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:49 +0000 UTC - event for externalip-test-ls4kf: {kubelet latest-worker} Created: Created container externalip-test
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:49 +0000 UTC - event for externalip-test-ls4kf: {kubelet latest-worker} Started: Started container externalip-test
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:50 +0000 UTC - event for execpod7sbxc: {default-scheduler } Scheduled: Successfully assigned services-9607/execpod7sbxc to latest-worker
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:52 +0000 UTC - event for execpod7sbxc: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:54 +0000 UTC - event for execpod7sbxc: {kubelet latest-worker} Started: Started container agnhost-container
Mar 21 23:26:57.462: INFO: At 2021-03-21 23:24:54 +0000 UTC - event for execpod7sbxc: {kubelet latest-worker} Created: Created container agnhost-container
Mar 21 23:26:57.508: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 21 23:26:57.508: INFO: execpod7sbxc latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:50 +0000 UTC }]
Mar 21 23:26:57.508: INFO: externalip-test-2g8rm latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:44 +0000 UTC }]
Mar 21 23:26:57.508: INFO: externalip-test-ls4kf latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:44 +0000 UTC }]
Mar 21 23:26:57.508: INFO: 
Mar 21 23:26:57.568: INFO: Logging node info for node latest-control-plane
Mar 21 23:26:57.571: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6921571 0 2021-02-19 10:11:38 +0000 UTC 
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"
f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 
+0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 21 23:26:57.572: INFO: Logging kubelet events for node latest-control-plane
Mar 21 23:26:57.629: INFO: Logging pods the kubelet thinks is on node latest-control-plane
Mar 21 23:26:57.738: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:57.738: INFO: Container etcd ready: true, restart count 0
Mar 21 23:26:57.738: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:57.738: INFO: Container kube-proxy ready: true, restart count 0
Mar 21 23:26:57.738: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:57.738: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 23:26:57.738: INFO: coredns-74ff55c5b-lv4vw started at 2021-03-21 23:24:39 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:57.738: INFO: Container coredns ready: true, restart count 0
Mar 21 23:26:57.738: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:57.738: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 21 23:26:57.738: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:57.738: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 21 23:26:57.738: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:57.738: INFO: Container kube-scheduler ready: true, restart count 0
Mar 21 23:26:57.738: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:57.738: INFO: Container kube-apiserver ready: true, restart count 0
W0321 23:26:58.032149 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:26:58.582: INFO: Latency metrics for node latest-control-plane
Mar 21 23:26:58.582: INFO: Logging node info for node latest-worker
Mar 21 23:26:58.681: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6923175 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volum
es-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-m
ock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-
csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 19:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:26:58.683: INFO: Logging kubelet events for node latest-worker Mar 21 23:26:59.028: INFO: Logging 
pods the kubelet thinks is on node latest-worker Mar 21 23:26:59.121: INFO: execpod7sbxc started at 2021-03-21 23:24:50 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:26:59.121: INFO: success started at 2021-03-21 23:23:53 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container success ready: false, restart count 0 Mar 21 23:26:59.121: INFO: failure-1 started at 2021-03-21 23:23:59 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container failure-1 ready: false, restart count 0 Mar 21 23:26:59.121: INFO: failure-4 started at 2021-03-21 23:26:51 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container failure-4 ready: false, restart count 0 Mar 21 23:26:59.121: INFO: failure-3 started at 2021-03-21 23:25:15 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container failure-3 ready: false, restart count 1 Mar 21 23:26:59.121: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:26:59.121: INFO: externalip-test-2g8rm started at 2021-03-21 23:24:44 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container externalip-test ready: true, restart count 0 Mar 21 23:26:59.121: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:26:59.121: INFO: ss2-1 started at 2021-03-21 23:25:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container webserver ready: false, restart count 0 Mar 21 23:26:59.121: INFO: inclusterclient started at 2021-03-21 23:22:57 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container inclusterclient ready: true, restart count 0 Mar 21 
23:26:59.121: INFO: ss2-0 started at 2021-03-21 23:26:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container webserver ready: true, restart count 0 Mar 21 23:26:59.121: INFO: externalip-test-ls4kf started at 2021-03-21 23:24:44 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container externalip-test ready: true, restart count 0 Mar 21 23:26:59.121: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:59.121: INFO: Container kube-proxy ready: true, restart count 0 W0321 23:26:59.174879 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:26:59.598: INFO: Latency metrics for node latest-worker Mar 21 23:26:59.598: INFO: Logging node info for node latest-worker2 Mar 21 23:27:00.041: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6920639 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes
-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-moc
k-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi
-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7
950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-moc
k-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 18:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 
docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:27:00.043: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:27:00.305: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:27:00.951: INFO: chaos-daemon-gfm87 started at 2021-03-21 17:24:47 +0000 UTC (0+1 container statuses recorded) Mar 
21 23:27:00.951: INFO: Container chaos-daemon ready: true, restart count 0
Mar 21 23:27:00.951: INFO: chaos-controller-manager-69c479c674-hcpp6 started at 2021-03-21 18:05:18 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container chaos-mesh ready: true, restart count 0
Mar 21 23:27:00.951: INFO: startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 started at 2021-03-21 23:25:51 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container busybox ready: true, restart count 0
Mar 21 23:27:00.951: INFO: without-label started at 2021-03-21 23:26:59 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container without-label ready: false, restart count 0
Mar 21 23:27:00.951: INFO: failure-2 started at 2021-03-21 23:24:07 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container failure-2 ready: true, restart count 1
Mar 21 23:27:00.951: INFO: kindnet-lhbxs started at 2021-03-21 17:24:47 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 23:27:00.951: INFO: explicit-nonroot-uid started at 2021-03-21 23:26:58 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container explicit-nonroot-uid ready: false, restart count 0
Mar 21 23:27:00.951: INFO: coredns-74ff55c5b-kcjgk started at 2021-03-21 23:24:39 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container coredns ready: true, restart count 0
Mar 21 23:27:00.951: INFO: rally-ae1e1e5d-vg2fpj0j-mktsk started at 2021-03-21 23:26:51 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container rally-ae1e1e5d-vg2fpj0j ready: false, restart count 0
Mar 21 23:27:00.951: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container kube-proxy ready: true, restart count 0
Mar 21 23:27:00.951: INFO: ss2-2 started at 2021-03-21 23:24:35 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container webserver ready: true, restart count 0
Mar 21 23:27:00.951: INFO: rally-ae1e1e5d-vg2fpj0j-hjbtf started at 2021-03-21 23:26:45 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:00.951: INFO: Container rally-ae1e1e5d-vg2fpj0j ready: true, restart count 0
W0321 23:27:01.269901 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:27:02.000: INFO: Latency metrics for node latest-worker2
Mar 21 23:27:02.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9607" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [142.178 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177

  Mar 21 23:26:57.157: Unexpected error:
      <*errors.errorString | 0xc000426b10>: {
          s: "no subset of available IP address found for the endpoint externalip-test within timeout 2m0s",
      }
      no subset of available IP address found for the endpoint externalip-test within timeout 2m0s
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201
------------------------------
{"msg":"FAILED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":54,"completed":4,"skipped":651,"failed":1,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services
  should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:27:02.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-1431
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 23:27:03.295: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 21 23:27:04.117: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:27:06.535: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:27:08.180: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:27:10.172: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:27:12.647: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:27:14.752: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:27:16.321: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:27:18.507: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:27:20.184: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:27:22.150: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 21 23:27:22.869: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:25.068: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:27.109: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:28.893: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:31.194: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:33.270: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:35.280: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:37.125: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:39.098: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:27:40.952: FAIL: Unexpected error:
    <*errors.StatusError | 0xc000f58500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"netserver-1\" not found",
            Reason: "NotFound",
            Details: {Name: "netserver-1", Group: "", Kind: "pods", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "netserver-1" not found
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000132380, 0x6b61659, 0x9, 0xc003104420, 0x0, 0xc000500400, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:829 +0x4cd
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000132380, 0xc003104420)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:731 +0x7b
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000132380, 0xc003104420)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:746 +0x50
k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0010fedc0, 0xc002835198, 0x1, 0x1, 0x1f9)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:129 +0x165
k8s.io/kubernetes/test/e2e/network.glob..func20.6.4()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:199 +0x6d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00386cd80, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-1431".
STEP: Found 10 events.
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:03 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-1431/netserver-0 to latest-worker
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:04 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-1431/netserver-1 to latest-worker2
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:05 +0000 UTC - event for netserver-0: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:05 +0000 UTC - event for netserver-1: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:07 +0000 UTC - event for netserver-0: {kubelet latest-worker} Created: Created container webserver
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:07 +0000 UTC - event for netserver-1: {kubelet latest-worker2} Created: Created container webserver
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:08 +0000 UTC - event for netserver-0: {kubelet latest-worker} Started: Started container webserver
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:08 +0000 UTC - event for netserver-1: {kubelet latest-worker2} Started: Started container webserver
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:08 +0000 UTC - event for netserver-1: {taint-controller } TaintManagerEviction: Marking for deletion Pod nettest-1431/netserver-1
Mar 21 23:27:41.193: INFO: At 2021-03-21 23:27:10 +0000 UTC - event for netserver-1: {kubelet latest-worker2} Killing: Stopping container webserver
Mar 21 23:27:41.315: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 21 23:27:41.315: INFO: netserver-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00
+0000 UTC 2021-03-21 23:27:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:03 +0000 UTC }] Mar 21 23:27:41.315: INFO: Mar 21 23:27:41.626: INFO: Logging node info for node latest-control-plane Mar 21 23:27:41.848: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6921571 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:st
atus":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:27:41.849: INFO: Logging kubelet events for node latest-control-plane Mar 21 23:27:42.444: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 21 23:27:42.525: INFO: kindnet-94zqp started at 
2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 23:27:42.525: INFO: coredns-74ff55c5b-lv4vw started at 2021-03-21 23:24:39 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container coredns ready: true, restart count 0
Mar 21 23:27:42.525: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 21 23:27:42.525: INFO: coredns-74ff55c5b-rwjjj started at 2021-03-21 23:27:10 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container coredns ready: true, restart count 0
Mar 21 23:27:42.525: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 21 23:27:42.525: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container kube-scheduler ready: true, restart count 0
Mar 21 23:27:42.525: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container kube-apiserver ready: true, restart count 0
Mar 21 23:27:42.525: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container etcd ready: true, restart count 0
Mar 21 23:27:42.525: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:42.525: INFO: Container kube-proxy ready: true, restart count 0
W0321 23:27:42.794328 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:27:43.806: INFO: Latency metrics for node latest-control-plane Mar 21 23:27:43.806: INFO: Logging node info for node latest-worker Mar 21 23:27:44.448: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6923175 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"cs
i-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volume
s-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-v
olumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 19:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operating
System":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 
docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 21 23:27:44.449: INFO: Logging kubelet events for node latest-worker
Mar 21 23:27:44.646: INFO: Logging pods the kubelet thinks is on node latest-worker
Mar 21 23:27:45.250: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 23:27:45.250: INFO: ss2-1 started at 2021-03-21 23:25:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container webserver ready: true, restart count 0
Mar 21 23:27:45.250: INFO: inclusterclient started at 2021-03-21 23:22:57 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container inclusterclient ready: true, restart count 0
Mar 21 23:27:45.250: INFO: netserver-0 started at 2021-03-21 23:27:03 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container webserver ready: true, restart count 0
Mar 21 23:27:45.250: INFO: ss2-0 started at 2021-03-21 23:26:45 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container webserver ready: true, restart count 0
Mar 21 23:27:45.250: INFO: pfpod started at 2021-03-21 23:27:40 +0000 UTC (0+2 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container portforwardtester ready: false, restart count 0
Mar 21 23:27:45.250: INFO: Container readiness ready: false, restart count 0
Mar 21 23:27:45.250: INFO: chaos-controller-manager-69c479c674-7xglh started at 2021-03-21 23:27:10 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container chaos-mesh ready: true, restart count 0
Mar 21 23:27:45.250: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container kube-proxy ready: true, restart count 0
Mar 21 23:27:45.250: INFO: agnhost-primary-928pz started at 2021-03-21 23:27:30 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container agnhost-primary ready: true, restart count 0
Mar 21 23:27:45.250: INFO: agnhost-primary-6jf8p started at 2021-03-21 23:27:31 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container agnhost-primary ready: true, restart count 0
Mar 21 23:27:45.250: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:45.250: INFO: Container chaos-daemon ready: true, restart count 0
W0321 23:27:45.488795 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:27:46.756: INFO: Latency metrics for node latest-worker
Mar 21 23:27:46.756: INFO: Logging node info for node latest-worker2
Mar 21 23:27:46.862: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6927107 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux]
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes
-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-moc
k-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi
-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7
950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-moc
k-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 18:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 
docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 21 23:27:46.863: INFO: Logging kubelet events for node latest-worker2
Mar 21 23:27:46.902: INFO: Logging pods the kubelet thinks is on node latest-worker2
Mar 21 23:27:47.324: INFO: kindnet-tgcxf started at 2021-03-21 23:27:37 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:47.324: INFO: Container kindnet-cni ready: false, restart count 0
Mar 21 23:27:47.324: INFO: rally-ae1e1e5d-vg2fpj0j-mktsk started at 2021-03-21 23:26:51 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:47.324: INFO: Container rally-ae1e1e5d-vg2fpj0j ready: false, restart count 0
Mar 21 23:27:47.324: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:47.324: INFO: Container kube-proxy ready: true, restart count 0
Mar 21 23:27:47.324: INFO: ss2-2 started at 2021-03-21 23:27:37 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:47.324: INFO: Container webserver ready: true, restart count 0
Mar 21 23:27:47.324: INFO: startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 started at 2021-03-21 23:25:51 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:47.324: INFO: Container busybox ready: false, restart count 0
Mar 21 23:27:47.324: INFO: chaos-daemon-qdvm8 started at 2021-03-21 23:27:35 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:47.324: INFO: Container chaos-daemon ready: false, restart count 0
Mar 21 23:27:47.324: INFO: explicit-root-uid started at 2021-03-21 23:27:19 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:47.324: INFO: Container explicit-root-uid ready: false, restart count 0
Mar 21 23:27:47.324: INFO: coredns-74ff55c5b-kcjgk started at 2021-03-21 23:24:39 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:27:47.324: INFO: Container coredns ready: false, restart count 0
W0321 23:27:47.609546 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:27:48.038: INFO: Latency metrics for node latest-worker2
Mar 21 23:27:48.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1431" for this suite.
• Failure [45.887 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for node-Service: http [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

Mar 21 23:27:40.952: Unexpected error:
    <*errors.StatusError | 0xc000f58500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"netserver-1\" not found",
            Reason: "NotFound",
            Details: {Name: "netserver-1", Group: "", Kind: "pods", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    pods "netserver-1" not found
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:829
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Services should function for node-Service: http","total":54,"completed":4,"skipped":679,"failed":2,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:27:48.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Mar 21 23:27:49.365: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:27:49.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-4410" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866
S [SKIPPING] in Spec Setup (BeforeEach) [0.873 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should handle updates to ExternalTrafficPolicy field [BeforeEach]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

Only supported for providers [gce gke] (not local)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:27:49.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-4013
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 23:27:49.951: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 21 23:27:50.160: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:27:52.261: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:27:54.326: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:27:56.182: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:27:58.199: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:28:00.407: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:28:02.402: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:28:04.480: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:28:06.227: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:28:08.211: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 21 23:28:08.331: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:28:10.362: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 21 23:28:17.704: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 21 23:28:17.704: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 21 23:28:19.963: INFO: Service node-port-service in namespace nettest-4013 found.
Mar 21 23:28:20.454: INFO: Service session-affinity-service in namespace nettest-4013 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 21 23:28:21.528: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 21 23:28:22.541: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.96.61.165:90
Mar 21 23:28:22.652: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:22.652: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:22.827: INFO: Tries: 10, in try: 0, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:24.894: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:24.894: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:25.075: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:27.092: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:27.093: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:27.207: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:29.234: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:29.234: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:29.447: INFO: Tries: 10, in try: 3, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:31.472: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:31.472: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:31.690: INFO: Tries: 10, in try: 4, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:33.703: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:33.703: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:33.917: INFO: Tries: 10, in try: 5, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:35.955: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:35.955: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:36.135: INFO: Tries: 10, in try: 6, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:38.148: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:38.148: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:38.315: INFO: Tries: 10, in try: 7, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:40.340: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:40.340: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:40.507: INFO: Tries: 10, in try: 8, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
Mar 21 23:28:43.049: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.120:9080/dial?request=hostName&protocol=udp&host=10.96.61.165&port=90&tries=1'] Namespace:nettest-4013 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:28:43.049: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:28:43.525: INFO: Tries: 10, in try: 9, stdout: {"responses":["netserver-0"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-4013, hostIp: 172.18.0.13, podIp: 10.244.1.120, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:28:11 +0000 UTC }]" }
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:28:45.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4013" for this suite.

• [SLOW TEST:56.802 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
Granular Checks: Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
should function for client IP based session affinity: udp [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]","total":54,"completed":5,"skipped":954,"failed":2,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Netpol API should support creating NetworkPolicy API operations
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48 [BeforeEach] [sig-network] Netpol API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:28:46.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename netpol STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating NetworkPolicy API operations /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 21 23:28:47.221: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Mar 21 23:28:47.330: INFO: starting watch STEP: patching STEP: updating Mar 21 23:28:47.443: INFO: waiting for watch events with expected annotations Mar 21 23:28:47.443: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"} Mar 21 23:28:47.443: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Netpol API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:28:48.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "netpol-4657" for this suite. 
•{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":54,"completed":6,"skipped":1071,"failed":2,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should create endpoints for unready pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:28:48.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should create endpoints for unready pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624 STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod] STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-fb942ed3-d999-4f7c-beb3-43f9e5831e2b] STEP: Verifying pods for RC slow-terminating-unready-pod Mar 21 23:28:51.108: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: trying to dial each unique pod Mar 21 23:28:57.356: INFO: Controller 
slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-mxrfl]: "NOW: 2021-03-21 23:28:57.355453757 +0000 UTC m=+1.809789768", 1 of 1 required successes so far STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-6999.svc.cluster.local Mar 21 23:28:57.356: INFO: Creating new exec pod Mar 21 23:29:03.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpod-j4l7f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/' Mar 21 23:29:10.894: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/\n" Mar 21 23:29:10.894: INFO: stdout: "NOW: 2021-03-21 23:29:10.880186217 +0000 UTC m=+15.334522178" STEP: Scaling down replication controller to zero STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-6999 to 0 STEP: Update service to not tolerate unready services STEP: Check if pod is unreachable Mar 21 23:29:16.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpod-j4l7f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/; test "$?" -ne "0"' Mar 21 23:29:17.693: INFO: rc: 1 Mar 21 23:29:17.693: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2021-03-21 23:29:17.683995266 +0000 UTC m=+22.138331209, err error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpod-j4l7f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/; test "$?" 
-ne "0": Command stdout: NOW: 2021-03-21 23:29:17.683995266 +0000 UTC m=+22.138331209 stderr: + curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/ + test 0 -ne 0 command terminated with exit code 1 error: exit status 1 Mar 21 23:29:19.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpod-j4l7f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/; test "$?" -ne "0"' Mar 21 23:29:21.480: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/\n+ test 7 -ne 0\n" Mar 21 23:29:21.480: INFO: stdout: "" STEP: Update service to tolerate unready services again STEP: Check if terminating pod is available through service Mar 21 23:29:21.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpod-j4l7f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/' Mar 21 23:29:23.173: INFO: rc: 7 Mar 21 23:29:23.173: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpod-j4l7f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/: Command stdout: stderr: + curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/ command terminated with exit code 7 error: exit status 7 Mar 21 23:29:25.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6999 exec execpod-j4l7f -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/' 
Mar 21 23:29:25.941: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-6999.svc.cluster.local:80/\n" Mar 21 23:29:25.941: INFO: stdout: "NOW: 2021-03-21 23:29:25.932298691 +0000 UTC m=+30.386634635" STEP: Remove pods immediately STEP: stopping RC slow-terminating-unready-pod in namespace services-6999 STEP: deleting service tolerate-unready in namespace services-6999 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:29:29.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6999" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:40.718 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create endpoints for unready pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624 ------------------------------ {"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":54,"completed":7,"skipped":1271,"failed":2,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:332 [BeforeEach] Forward PTR lookup 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:29:29.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns-config-map STEP: Waiting for a default service account to be provisioned in namespace [It] should forward PTR records lookup to upstream nameserver [Slow][Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:332 STEP: Finding a DNS pod Mar 21 23:29:29.853: INFO: Using DNS pod: coredns-74ff55c5b-lv4vw Mar 21 23:29:29.953: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 dns-config-map-6822 1dd600e7-1e68-4d47-ba24-8157f7c6457b 6930506 0 2021-03-21 23:29:29 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-21 23:29:29 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":10101,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xftnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xftnb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:n
il,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:10101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xftnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:ni
l,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 23:29:36.124: INFO: Created service &Service{ObjectMeta:{e2e-dns-configmap dns-config-map-6822 ad2fd16b-c311-460a-b2f4-402c43ffb715 6930687 0 2021-03-21 23:29:36 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-21 23:29:36 +0000 UTC FieldsV1 {"f:spec":{"f:ports":{".":{},"k:{\"port\":10101,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{".":{},"f:app":{}},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:10101,TargetPort:{0 10101 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: e2e-dns-configmap,},ClusterIP:10.96.212.68,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.212.68],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} Mar 21 23:29:36.218: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4 dns-config-map-6822 22249156-f169-4ab9-85d2-aa0d18d16ecb 6930693 0 2021-03-21 23:29:36 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-21 
23:29:36 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-d2f6w,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-xftnb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xftnb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,
Command:[/coredns],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-xftnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGat
e{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 23:29:44.445: INFO: ExecWithOptions {Command:[dig +short -x 8.8.8.8] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:29:44.445: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:29:44.590: INFO: Running dig: [dig +short -x 8.8.8.8], stdout: "dns.google.", stderr: "", err: STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { health ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } forward . 
10.244.1.129 }] BinaryData:map[]} Mar 21 23:29:45.306: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:29:45.306: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:29:45.559: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:29:46.561: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:29:46.561: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:01.861: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: ";; connection timed out; no servers could be reached", stderr: "", err: command terminated with exit code 9 Mar 21 23:30:02.561: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:02.561: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:05.178: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:05.561: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:05.561: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:11.738: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:12.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] 
Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:12.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:14.745: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:15.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:15.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:17.685: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:18.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:18.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:20.724: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:21.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:21.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:23.713: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:24.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} 
Mar 21 23:30:24.561: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:26.742: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:27.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:27.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:29.763: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:30.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:30.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:32.754: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:33.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:33.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:35.733: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:36.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:36.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:38.744: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:39.560: INFO: ExecWithOptions 
{Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:39.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:41.724: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:30:42.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:30:42.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:30:44.715: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err:
[... identical ExecWithOptions / Running dig cycles, repeated roughly every 3s with empty stdout and stderr, from Mar 21 23:30:45.560 through Mar 21 23:31:41.765, elided ...]
Mar 21 23:31:42.560: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:31:42.560: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:31:44.778: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:31:45.561: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:31:45.561: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:31:47.703: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:31:47.703: INFO: ExecWithOptions {Command:[dig +short -x 192.0.2.123] Namespace:dns-config-map-6822 PodName:e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:31:47.703: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:31:49.859: INFO: Running dig: [dig +short -x 192.0.2.123], stdout: "", stderr: "", err: Mar 21 23:31:49.859: FAIL: dig result did not match: []string{} after 2m0s Full Stack Trace k8s.io/kubernetes/test/e2e/network.(*dnsPtrFwdTest).run(0xc0012a38c0, 0x7ace422000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:202 +0x58a k8s.io/kubernetes/test/e2e/network.glob..func3.2.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:334 +0x8f k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00386cd80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 21 23:31:50.790: INFO: Delete of pod dns-config-map-6822/e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4 failed: pods "e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4" not found STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: 
Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } ] BinaryData:map[]} [AfterEach] Forward PTR lookup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "dns-config-map-6822". STEP: Found 11 events. Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:30 +0000 UTC - event for e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241: {default-scheduler } Scheduled: Successfully assigned dns-config-map-6822/e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 to latest-worker Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:32 +0000 UTC - event for e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:33 +0000 UTC - event for e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241: {kubelet latest-worker} Created: Created container agnhost-container Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:33 +0000 UTC - event for e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241: {kubelet latest-worker} Started: Started container agnhost-container Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:36 +0000 UTC - event for e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4: {default-scheduler } Scheduled: Successfully assigned dns-config-map-6822/e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4 to latest-worker2 Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:38 +0000 UTC - 
event for e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:40 +0000 UTC - event for e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:41 +0000 UTC - event for e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:52 +0000 UTC - event for e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4: {taint-controller } TaintManagerEviction: Marking for deletion Pod dns-config-map-6822/e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4 Mar 21 23:31:57.890: INFO: At 2021-03-21 23:29:52 +0000 UTC - event for e2e-configmap-dns-server-7451fbf0-2316-4d28-8078-4536dfaa59f4: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 21 23:31:57.890: INFO: At 2021-03-21 23:31:56 +0000 UTC - event for e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 21 23:31:58.753: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:31:58.753: INFO: Mar 21 23:31:59.047: INFO: Logging node info for node latest-control-plane Mar 21 23:32:00.563: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6930471 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager 
Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:29:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:29:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:29:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:29:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:32:00.564: INFO: Logging kubelet events for node latest-control-plane Mar 21 23:32:01.243: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 21 23:32:02.125: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:02.125: INFO: Container etcd ready: true, restart count 0 Mar 21 23:32:02.125: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:02.125: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:32:02.125: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:02.125: INFO: Container kube-apiserver ready: true, restart count 0 Mar 21 23:32:02.125: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:02.125: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:32:02.125: INFO: coredns-74ff55c5b-xcknl started at 2021-03-21 23:31:55 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:02.125: INFO: Container coredns ready: false, restart count 0 Mar 21 23:32:02.125: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:02.125: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 21 23:32:02.125: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:02.125: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 21 
23:32:02.125: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:02.125: INFO: Container kube-scheduler ready: true, restart count 0 W0321 23:32:03.438400 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:32:05.079: INFO: Latency metrics for node latest-control-plane Mar 21 23:32:05.079: INFO: Logging node info for node latest-worker Mar 21 23:32:05.746: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6934503 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-
mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-46
89":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-vol
umes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-
volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 19:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:l
astHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 
docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:32:05.747: INFO: Logging kubelet events for node latest-worker Mar 21 23:32:06.744: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 21 23:32:06.954: INFO: overcommit-15 started at (0+0 container statuses recorded) Mar 21 23:32:06.954: INFO: overcommit-19 started at (0+0 container statuses recorded) Mar 21 23:32:06.954: INFO: pod3 started at 2021-03-21 23:31:42 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.954: INFO: Container agnhost ready: true, restart count 0 Mar 21 23:32:06.954: INFO: overcommit-18 started at (0+0 container statuses recorded) Mar 21 23:32:06.955: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:32:06.955: INFO: overcommit-13 started at (0+0 container statuses recorded) Mar 21 23:32:06.955: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:32:06.955: INFO: overcommit-14 started at (0+0 container statuses recorded) Mar 21 23:32:06.955: INFO: overcommit-9 started at 2021-03-21 23:31:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container overcommit-9 ready: false, restart count 0 Mar 21 23:32:06.955: INFO: inclusterclient started at 2021-03-21 23:22:57 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container inclusterclient ready: true, restart count 0 Mar 21 23:32:06.955: INFO: pfpod started at 2021-03-21 23:31:39 +0000 UTC (0+2 container statuses 
recorded) Mar 21 23:32:06.955: INFO: Container portforwardtester ready: false, restart count 0 Mar 21 23:32:06.955: INFO: Container readiness ready: true, restart count 0 Mar 21 23:32:06.955: INFO: ss2-2 started at 2021-03-21 23:30:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container webserver ready: false, restart count 0 Mar 21 23:32:06.955: INFO: pfpod started at 2021-03-21 23:31:09 +0000 UTC (0+2 container statuses recorded) Mar 21 23:32:06.955: INFO: Container portforwardtester ready: false, restart count 0 Mar 21 23:32:06.955: INFO: Container readiness ready: false, restart count 0 Mar 21 23:32:06.955: INFO: ss2-0 started at 2021-03-21 23:30:00 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container webserver ready: true, restart count 0 Mar 21 23:32:06.955: INFO: pod1 started at 2021-03-21 23:31:29 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container agnhost ready: true, restart count 0 Mar 21 23:32:06.955: INFO: pod2 started at 2021-03-21 23:31:35 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container agnhost ready: true, restart count 0 Mar 21 23:32:06.955: INFO: overcommit-11 started at 2021-03-21 23:31:55 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container overcommit-11 ready: false, restart count 0 Mar 21 23:32:06.955: INFO: chaos-controller-manager-69c479c674-7xglh started at 2021-03-21 23:27:10 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:32:06.955: INFO: overcommit-12 started at 2021-03-21 23:31:55 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container overcommit-12 ready: false, restart count 0 Mar 21 23:32:06.955: INFO: ss2-1 started at 2021-03-21 23:28:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container webserver ready: true, restart count 0 Mar 21 23:32:06.955: INFO: 
overcommit-17 started at (0+0 container statuses recorded) Mar 21 23:32:06.955: INFO: overcommit-16 started at (0+0 container statuses recorded) Mar 21 23:32:06.955: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:06.955: INFO: Container kube-proxy ready: true, restart count 0 W0321 23:32:07.964689 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:32:09.523: INFO: Latency metrics for node latest-worker Mar 21 23:32:09.523: INFO: Logging node info for node latest-worker2 Mar 21 23:32:09.980: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6934459 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-cs
i-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mo
ck-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-
5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-
mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-
mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 18:46:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Uns
chedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 
docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:32:09.981: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:32:10.481: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:32:10.941: INFO: overcommit-7 started at 2021-03-21 23:31:53 +0000 UTC (0+1 container statuses recorded) Mar 21 
23:32:10.941: INFO: Container overcommit-7 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-8 started at 2021-03-21 23:31:53 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-8 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: chaos-daemon-wl4fl started at 2021-03-21 23:31:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-2 started at 2021-03-21 23:31:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-2 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-4 started at 2021-03-21 23:31:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-4 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-6 started at 2021-03-21 23:31:53 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-6 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: kindnet-vhlbm started at 2021-03-21 23:31:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:32:10.941: INFO: coredns-74ff55c5b-7tkvj started at 2021-03-21 23:31:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container coredns ready: false, restart count 0 Mar 21 23:32:10.941: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:32:10.941: INFO: pfpod started at 2021-03-21 23:32:08 +0000 UTC (0+2 container statuses recorded) Mar 21 23:32:10.941: INFO: Container portforwardtester ready: false, restart count 0 Mar 21 23:32:10.941: INFO: Container readiness ready: false, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-3 started at 2021-03-21 23:31:52 +0000 UTC (0+1 
container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-3 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-10 started at 2021-03-21 23:31:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-10 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-1 started at 2021-03-21 23:31:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-1 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-5 started at 2021-03-21 23:31:53 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-5 ready: false, restart count 0 Mar 21 23:32:10.941: INFO: overcommit-0 started at 2021-03-21 23:31:51 +0000 UTC (0+1 container statuses recorded) Mar 21 23:32:10.941: INFO: Container overcommit-0 ready: true, restart count 0 W0321 23:32:12.692654 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:32:14.568: INFO: Latency metrics for node latest-worker2 Mar 21 23:32:14.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-config-map-6822" for this suite. 
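The dns-config-map spec being torn down here exercises forward PTR lookups: a query for a reverse-DNS name of the form `d.c.b.a.in-addr.arpa.` must be forwarded to the configured upstream nameserver, and the failure reports an empty dig result. As a minimal illustration of how such a reverse name is derived from an IPv4 address (a hypothetical helper for reading the test, not part of the e2e framework):

```python
def ptr_name(ipv4: str) -> str:
    """Build the reverse-DNS (PTR) query name for an IPv4 address.

    Hypothetical helper: mirrors the standard in-addr.arpa convention,
    not code from the e2e suite.
    """
    octets = ipv4.split(".")
    if len(octets) != 4:
        raise ValueError(f"not an IPv4 address: {ipv4}")
    return ".".join(reversed(octets)) + ".in-addr.arpa."

# A dig-style forward PTR lookup queries this name, e.g.
#   dig -x 10.96.0.10   is equivalent to   dig PTR 10.0.96.10.in-addr.arpa.
print(ptr_name("10.96.0.10"))  # 10.0.96.10.in-addr.arpa.
```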
• Failure [166.592 seconds] [sig-network] DNS configMap nameserver /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Forward PTR lookup /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:329 should forward PTR records lookup to upstream nameserver [Slow][Serial] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:332 Mar 21 23:31:49.859: dig result did not match: []string{} after 2m0s /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:202 ------------------------------ {"msg":"FAILED [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","total":54,"completed":7,"skipped":1382,"failed":3,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking should check kube-proxy urls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:32:16.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should check kube-proxy urls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138 STEP: Performing setup for networking test in namespace nettest-6407 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 23:32:18.886: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 21 23:32:19.696: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:32:21.963: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:32:24.381: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:32:25.906: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:32:27.850: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:32:29.995: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:32:31.830: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:32:33.767: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:32:35.823: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:32:37.745: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 21 23:32:37.887: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 21 23:32:39.988: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 21 23:32:41.941: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 21 23:32:44.253: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 21 23:32:53.817: INFO: Setting MaxTries for pod polling to 34 for 
networking test based on endpoint count 2 STEP: Getting node addresses Mar 21 23:32:53.817: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 21 23:32:54.501: INFO: Service node-port-service in namespace nettest-6407 found. Mar 21 23:32:54.957: INFO: Service session-affinity-service in namespace nettest-6407 found. STEP: Waiting for NodePort service to expose endpoint Mar 21 23:32:55.978: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 21 23:32:57.009: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: checking kube-proxy URLs STEP: Getting kube-proxy self URL /healthz Mar 21 23:32:57.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=nettest-6407 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz' Mar 21 23:32:57.899: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n" Mar 21 23:32:57.899: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Sun, 21 Mar 2021 23:32:57 GMT\r\nContent-Length: 157\r\n\r\n{\"lastUpdated\": \"2021-03-21 23:32:57.889863098 +0000 UTC m=+2640040.973466258\",\"currentTime\": \"2021-03-21 23:32:57.889863098 +0000 UTC m=+2640040.973466258\"}" STEP: Getting kube-proxy self URL /healthz Mar 21 23:32:57.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=nettest-6407 exec host-test-container-pod -- /bin/sh -x -c curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz' Mar 21 23:32:58.177: INFO: stderr: "+ curl -i -q -s --connect-timeout 1 http://localhost:10256/healthz\n" Mar 21 23:32:58.177: INFO: stdout: "HTTP/1.1 200 OK\r\nContent-Type: 
application/json\r\nX-Content-Type-Options: nosniff\r\nDate: Sun, 21 Mar 2021 23:32:58 GMT\r\nContent-Length: 157\r\n\r\n{\"lastUpdated\": \"2021-03-21 23:32:58.169493711 +0000 UTC m=+2640041.253096866\",\"currentTime\": \"2021-03-21 23:32:58.169493711 +0000 UTC m=+2640041.253096866\"}" STEP: Checking status code against http://localhost:10249/proxyMode Mar 21 23:32:58.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=nettest-6407 exec host-test-container-pod -- /bin/sh -x -c curl -o /dev/null -i -q -s -w %{http_code} --connect-timeout 1 http://localhost:10249/proxyMode' Mar 21 23:32:58.880: INFO: stderr: "+ curl -o /dev/null -i -q -s -w '%{http_code}' --connect-timeout 1 http://localhost:10249/proxyMode\n" Mar 21 23:32:58.880: INFO: stdout: "200" [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:32:58.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-6407" for this suite. 
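The probes above hit kube-proxy's local endpoints: `:10256/healthz` returns a small JSON body carrying `lastUpdated` and `currentTime`, and `:10249/proxyMode` returns HTTP 200. A minimal sketch of interpreting the healthz payload, using a sample body copied from the log output above (the equality check is an illustration of why this response reads as healthy, not the exact assertion the e2e test makes):

```python
import json

# Sample /healthz body as captured in the kubectl exec output above.
healthz_body = (
    '{"lastUpdated": "2021-03-21 23:32:58.169493711 +0000 UTC m=+2640041.253096866",'
    '"currentTime": "2021-03-21 23:32:58.169493711 +0000 UTC m=+2640041.253096866"}'
)

status = json.loads(healthz_body)
# The proxy loop is current when lastUpdated is not lagging behind currentTime;
# in this capture the two timestamps are identical.
assert status["lastUpdated"] == status["currentTime"]
print("kube-proxy healthy as of", status["currentTime"])
```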
• [SLOW TEST:43.224 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should check kube-proxy urls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138 ------------------------------ {"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":54,"completed":8,"skipped":1474,"failed":3,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]"]} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be rejected when no endpoints exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:32:59.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be rejected when no endpoints exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968 STEP: creating a service with no endpoints STEP: creating execpod-noendpoints on node latest-worker Mar 21 23:32:59.887: INFO: Creating new exec pod Mar 21 23:33:06.652: INFO: waiting up to 30s to connect to no-pods:80 STEP: hitting service no-pods:80 from pod 
execpod-noendpoints on node latest-worker Mar 21 23:33:06.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-2190 exec execpod-noendpointsphrwm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80' Mar 21 23:33:09.098: INFO: rc: 1 Mar 21 23:33:09.098: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-2190 exec execpod-noendpointsphrwm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80: Command stdout: stderr: + /agnhost connect '--timeout=3s' no-pods:80 REFUSED command terminated with exit code 1 error: exit status 1 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:33:09.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2190" for this suite. 
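The spec above expects `REFUSED` when dialing a Service with no endpoints: with no backends behind the virtual IP, the connection is actively rejected rather than left to time out. The same observable behavior can be sketched with a plain socket (a local illustration of connection refusal only, not the agnhost-based check the test actually runs):

```python
import socket

def dial(host: str, port: int, timeout: float = 3.0) -> str:
    """Return 'REFUSED' if the target actively rejects, 'OK' if it accepts."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "OK"
    except ConnectionRefusedError:
        return "REFUSED"

# Bind-then-close a listener to obtain a local port nothing is listening on.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
unused_port = probe.getsockname()[1]
probe.close()

print(dial("127.0.0.1", unused_port))  # REFUSED, like the no-pods:80 check
```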
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:10.536 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be rejected when no endpoints exist /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968 ------------------------------ {"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":54,"completed":9,"skipped":1487,"failed":3,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] ESIPP [Slow] should work from pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036 [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:33:09.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename esipp STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858 Mar 21 23:33:10.863: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] ESIPP [Slow] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:33:10.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "esipp-8164" for this suite. [AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866 S [SKIPPING] in Spec Setup (BeforeEach) [1.505 seconds] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should work from pods [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should prevent NodePort collisions /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:33:11.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should prevent NodePort collisions /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440 STEP: creating service nodeport-collision-1 with type NodePort in namespace services-5093 STEP: creating service 
nodeport-collision-2 with conflicting NodePort STEP: deleting service nodeport-collision-1 to release NodePort STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort STEP: deleting service nodeport-collision-2 in namespace services-5093 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:33:27.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5093" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:16.752 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should prevent NodePort collisions /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440 ------------------------------ {"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":54,"completed":10,"skipped":1634,"failed":3,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]"]} SSSSSSS ------------------------------ [sig-network] Services should be able to up and down services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:33:28.135: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to up and down services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015 STEP: creating up-down-1 in namespace services-3785 STEP: creating service up-down-1 in namespace services-3785 STEP: creating replication controller up-down-1 in namespace services-3785 I0321 23:33:30.724794 7 runners.go:190] Created replication controller with name: up-down-1, namespace: services-3785, replica count: 3 I0321 23:33:33.776036 7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:33:36.776996 7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:33:39.777465 7 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: creating up-down-2 in namespace services-3785 STEP: creating service up-down-2 in namespace services-3785 STEP: creating replication controller up-down-2 in namespace services-3785 I0321 23:33:40.337147 7 runners.go:190] Created replication controller with name: up-down-2, namespace: services-3785, replica count: 3 I0321 23:33:43.389037 7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:33:46.389204 7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:33:49.389841 7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:33:52.390613 7 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service up-down-1 is up Mar 21 23:33:53.032: INFO: Creating new host exec pod Mar 21 23:33:53.555: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:33:55.675: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:33:58.174: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:33:59.959: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 21 23:33:59.959: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 21 23:34:11.174: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.126.194:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-host-exec-pod Mar 21 23:34:11.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.126.194:80 2>&1 || true; echo; done' Mar 21 23:34:13.589: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.126.194:80\n+ echo\n [trace '+ wget -q -T 1 -O - http://10.96.126.194:80\n+ echo\n' repeated for the remaining 149 iterations]" Mar 21 23:34:13.589: INFO: stdout:
"up-down-1-nds9x\nup-down-1-d658f\nup-down-1-d658f\n[... 150 responses in total, all from the three up-down-1 backends (nds9x, d658f, glsqr) ...]\n" Mar 21 23:34:13.589: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.126.194:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-exec-pod-7j5lm Mar 21 23:34:13.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-exec-pod-7j5lm -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.126.194:80 2>&1 || true; echo; done' Mar 21 23:34:14.777: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.126.194:80\n+ echo\n[... "+ wget -q -T 1 -O - http://10.96.126.194:80\n+ echo" repeated for all 150 iterations ...]\n" Mar 21 23:34:14.777: INFO: stdout: 
"up-down-1-d658f\nup-down-1-glsqr\nup-down-1-glsqr\n[... 150 responses in total, all from the three up-down-1 backends (d658f, glsqr, nds9x) ...]\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3785 STEP: Deleting pod verify-service-up-exec-pod-7j5lm in namespace services-3785 STEP: verifying service up-down-2 is up Mar 21 23:34:17.966: INFO: Creating new host exec pod Mar 21 23:34:19.463: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:34:22.526: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:34:23.982: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:34:26.085: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:34:27.550: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 21 23:34:27.550: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 21 23:34:36.011: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-host-exec-pod Mar 21 23:34:36.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - 
http://10.96.179.248:80 2>&1 || true; echo; done' Mar 21 23:34:39.104: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n[... "+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo" repeated for all 150 iterations ...]\n" Mar 21 23:34:39.104: INFO: stdout: "up-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\n[... 150 responses in total, all from the three up-down-2 backends (8dhrn, ll5sq, t52rr) ...]\n" Mar 21 23:34:39.104: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-exec-pod-t8kc7 Mar 21 23:34:39.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-exec-pod-t8kc7 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done' Mar 21 23:34:40.355: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - 
http://10.96.179.248:80\n+ echo\n[... "+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo" repeated for the remaining iterations ...]\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n" Mar 21 23:34:40.355: INFO: stdout: 
"up-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-
8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3785 STEP: Deleting pod verify-service-up-exec-pod-t8kc7 in namespace services-3785 STEP: stopping service up-down-1 STEP: deleting ReplicationController up-down-1 in namespace services-3785, will wait for the garbage collector to delete the pods Mar 21 23:34:50.149: INFO: Deleting ReplicationController up-down-1 took: 2.330886229s Mar 21 23:34:51.149: INFO: Terminating ReplicationController up-down-1 pods took: 1.000278224s STEP: verifying service up-down-1 is not up Mar 21 23:35:27.084: INFO: Creating new host exec pod Mar 21 23:35:27.809: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:35:29.988: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:35:31.813: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 21 23:35:31.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.126.194:80 && echo service-down-failed' Mar 21 23:35:34.105: INFO: rc: 28 Mar 21 23:35:34.105: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.126.194:80 && echo 
service-down-failed" in pod services-3785/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.126.194:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.126.194:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3785 STEP: verifying service up-down-2 is still up Mar 21 23:35:34.772: INFO: Creating new host exec pod Mar 21 23:35:35.305: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:35:37.344: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:35:39.502: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:35:41.582: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 21 23:35:41.582: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 21 23:35:48.381: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-host-exec-pod Mar 21 23:35:48.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done' Mar 21 23:35:49.086: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n" Mar 21 23:35:49.086: INFO: stdout: 
"up-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-
ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\n" Mar 21 23:35:49.086: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-exec-pod-jv8lt Mar 21 23:35:49.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-exec-pod-jv8lt -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done' Mar 21 23:35:49.687: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n" Mar 21 23:35:49.687: INFO: stdout: 
"up-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-
ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3785 STEP: Deleting pod verify-service-up-exec-pod-jv8lt in namespace services-3785 STEP: creating service up-down-3 in namespace services-3785 STEP: creating service up-down-3 in namespace services-3785 STEP: creating replication controller up-down-3 in namespace services-3785 I0321 23:35:51.628737 7 runners.go:190] Created replication controller with name: up-down-3, namespace: services-3785, replica count: 3 I0321 23:35:54.679746 7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:35:57.680363 7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:36:00.681037 7 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service up-down-2 is still up Mar 21 23:36:00.686: INFO: Creating new host exec pod Mar 21 23:36:00.861: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:03.174: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:04.895: INFO: The status of Pod verify-service-up-host-exec-pod 
is Running (Ready = true) Mar 21 23:36:04.895: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 21 23:36:09.108: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-host-exec-pod Mar 21 23:36:09.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done' Mar 21 23:36:09.746: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ 
echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n" Mar 21 23:36:09.746: INFO: stdout: "up-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup
-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\n" Mar 21 23:36:09.746: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-exec-pod-qffdq Mar 21 23:36:09.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-exec-pod-qffdq -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.179.248:80 2>&1 || true; echo; done' Mar 21 23:36:10.442: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ 
wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.179.248:80\n+ echo\n" Mar 21 23:36:10.442: INFO: stdout: 
"up-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-
8dhrn\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-ll5sq\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-t52rr\nup-down-2-8dhrn\nup-down-2-8dhrn\nup-down-2-ll5sq\nup-down-2-ll5sq\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3785 STEP: Deleting pod verify-service-up-exec-pod-qffdq in namespace services-3785 STEP: verifying service up-down-3 is up Mar 21 23:36:11.152: INFO: Creating new host exec pod Mar 21 23:36:11.474: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:13.479: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:16.150: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:18.284: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:19.761: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:21.795: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:24.265: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 21 23:36:24.265: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 21 23:36:31.141: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.92.41:80 2>&1 || true; echo; done" in pod 
services-3785/verify-service-up-host-exec-pod Mar 21 23:36:31.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.92.41:80 2>&1 || true; echo; done' Mar 21 23:36:32.074: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.92.41:80\n+ echo\n[… trace lines "+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n" repeated verbatim for the remainder of the 150 iterations …]" Mar 21 23:36:32.074: INFO: stdout: "up-down-3-x4bt5\nup-down-3-9z77c\nup-down-3-zkvdj\n[… 147 further responses, all served by the same three endpoint pods x4bt5, 9z77c and zkvdj …]" Mar 21 23:36:32.074: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.92.41:80 2>&1 || true; echo; done" in pod services-3785/verify-service-up-exec-pod-8xqx7 Mar 21 23:36:32.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-3785 exec verify-service-up-exec-pod-8xqx7 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.92.41:80 2>&1 || true; echo; done' Mar 21 23:36:33.316: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.92.41:80\n+ echo\n[… the same two trace lines repeated verbatim for each of the 150 iterations …]" Mar 21 23:36:33.316: INFO: stdout: "up-down-3-zkvdj\nup-down-3-x4bt5\nup-down-3-x4bt5\n[… 147 further responses, again spread across endpoint pods x4bt5, 9z77c and zkvdj …]" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-3785 STEP: Deleting pod verify-service-up-exec-pod-8xqx7 in namespace services-3785 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:36:34.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3785" for this suite. 
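The wget loops above succeed because every response line names one of the three backing pods. A minimal sketch of the coverage check the verify-service-up step effectively performs (the helper name and shape are hypothetical, not the framework's actual code): collect the response lines and confirm that every expected endpoint pod answered at least once.

```python
def endpoints_covered(responses, expected):
    """Return True if every expected endpoint pod name appears
    at least once among the collected response lines."""
    return set(expected) <= set(responses)

# Example using the pod names from the log above (abbreviated;
# the real run collects 150 response lines per exec pod).
expected = {"up-down-3-x4bt5", "up-down-3-9z77c", "up-down-3-zkvdj"}
responses = ["up-down-3-x4bt5", "up-down-3-9z77c",
             "up-down-3-zkvdj", "up-down-3-x4bt5"]
print(endpoints_covered(responses, expected))  # True: all three pods responded
```

Because kube-proxy load-balances the ClusterIP across all endpoints, 150 requests make it overwhelmingly likely that each of the three pods is hit, which is what the PASSED verdict below reflects.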
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:187.749 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to up and down services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to up and down services","total":54,"completed":11,"skipped":1641,"failed":3,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should allow pods to hairpin back to themselves through services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:36:35.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should allow pods to hairpin back to themselves through services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986 STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-9522 Mar 21 23:36:38.611: INFO: hairpin-test cluster ip: 10.96.229.49 STEP: creating a client/server pod Mar 21 23:36:39.497: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:41.590: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:43.703: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:36:45.745: INFO: The status of Pod hairpin is Running (Ready = true) STEP: waiting for the service to expose an endpoint STEP: waiting up to 3m0s for service hairpin-test in namespace services-9522 to expose endpoints map[hairpin:[8080]] Mar 21 23:36:46.150: INFO: successfully validated that service hairpin-test in namespace services-9522 exposes endpoints map[hairpin:[8080]] STEP: Checking if the pod can reach itself E0321 23:36:46.152550 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:36:47.467241 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:36:49.793511 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:36:54.822035 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:37:05.578841 7 reflector.go:138] 
k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:37:28.400452 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:38:02.374793 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 21 23:38:46.151: FAIL: Unexpected error: <*errors.errorString | 0xc0026f8020>: { s: "no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s", } no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.7() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012 +0x6a5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00386cd80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-9522". STEP: Found 4 events. 
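The failure above is a timeout, not an assertion on wrong data: the framework's service jig polls for the service's endpoint addresses, and because every attempt to list/watch v1.EndpointSlice failed ("the server could not find the requested resource"), no addresses were ever observed before the 2m0s deadline. A hedged sketch of that wait loop (the function name, parameters, and intervals are illustrative, not the framework's real Go API):

```python
import time

def wait_for_endpoint_subset(get_addresses, timeout_s=120.0, interval_s=0.01):
    """Poll get_addresses() until it returns a non-empty address list,
    or raise TimeoutError once the deadline passes (2m0s in this run)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        addrs = get_addresses()
        if addrs:
            return addrs
        time.sleep(interval_s)
    raise TimeoutError(
        f"no subset of available IP address found within timeout {timeout_s}s")
```

In this run the informer backing `get_addresses` never populated (note the growing backoff between the reflector errors), so the poll returned nothing until the deadline and the test failed with the message seen in the stack trace.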
Mar 21 23:38:46.210: INFO: At 2021-03-21 23:36:39 +0000 UTC - event for hairpin: {default-scheduler } Scheduled: Successfully assigned services-9522/hairpin to latest-worker2 Mar 21 23:38:46.210: INFO: At 2021-03-21 23:36:42 +0000 UTC - event for hairpin: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:38:46.210: INFO: At 2021-03-21 23:36:43 +0000 UTC - event for hairpin: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:38:46.210: INFO: At 2021-03-21 23:36:44 +0000 UTC - event for hairpin: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:38:46.267: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:38:46.267: INFO: hairpin latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:36:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:36:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:36:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:36:39 +0000 UTC }] Mar 21 23:38:46.267: INFO: Mar 21 23:38:46.364: INFO: Logging node info for node latest-control-plane Mar 21 23:38:46.453: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6937825 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:34:29 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:34:29 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:34:29 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:34:29 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:38:46.454: INFO: Logging kubelet events for node latest-control-plane Mar 21 23:38:46.493: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 21 23:38:46.589: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:46.589: INFO: Container etcd ready: true, restart count 0 Mar 21 23:38:46.589: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:46.589: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:38:46.589: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:46.589: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 21 23:38:46.589: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:46.589: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 21 23:38:46.589: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:46.589: INFO: Container kube-scheduler ready: true, restart count 0 Mar 21 23:38:46.589: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:46.589: INFO: Container kube-apiserver ready: true, restart count 0 Mar 21 23:38:46.589: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:46.589: INFO: Container kindnet-cni ready: true, restart 
count 0 Mar 21 23:38:46.589: INFO: coredns-74ff55c5b-xcknl started at 2021-03-21 23:31:55 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:46.589: INFO: Container coredns ready: true, restart count 0 W0321 23:38:46.630246 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:38:46.779: INFO: Latency metrics for node latest-control-plane Mar 21 23:38:46.779: INFO: Logging node info for node latest-worker Mar 21 23:38:46.801: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6941925 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock
-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":
"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes
-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volu
mes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 19:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastH
eartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:38:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:38:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:38:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:38:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 
docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:38:46.802: INFO: Logging kubelet events for node latest-worker Mar 21 23:38:46.890: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 21 23:38:47.284: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:38:47.284: INFO: up-down-2-t52rr started at 2021-03-21 23:33:40 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container up-down-2 ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-61465ce3-09e1-450a-8a48-ae88a5206db4 started at 2021-03-21 23:34:01 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-5e7513fa-2103-4670-8f8a-3c582b187dcd started at 2021-03-21 23:34:06 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-98e38606-15d4-41e4-aa7f-32b8916dbbea started at 2021-03-21 23:34:01 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-b3b3ce98-d5c7-475d-87a1-c6ed5076711e started at 2021-03-21 23:34:01 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: chaos-controller-manager-69c479c674-7xglh started at 2021-03-21 23:27:10 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 
23:38:47.284: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:38:47.284: INFO: pod-8d95713e-1a47-404b-a862-abfdb60ede75 started at 2021-03-21 23:34:01 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-ed9a418c-2df1-48f0-b932-e010172a8189 started at 2021-03-21 23:34:02 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-ec8e9cdd-52f5-441a-91cb-86be089e2481 started at 2021-03-21 23:34:01 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-with-pod-antiaffinity started at 2021-03-21 23:35:42 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container pod-with-pod-antiaffinity ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-d4c825ca-2f98-4a30-8325-8910ea310f21 started at 2021-03-21 23:34:00 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-f4a4a7d1-da33-48cd-a596-b33e45b2af2b started at 2021-03-21 23:34:01 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: pod-ebd1422a-1278-498a-ab80-b9faa6216c77 started at 2021-03-21 23:34:00 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:38:47.284: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:38:47.284: INFO: pod-da4759bd-d4df-401a-a992-7300350783fe 
started at 2021-03-21 23:34:06 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:47.284: INFO: Container write-pod ready: false, restart count 0 W0321 23:38:47.298821 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:38:47.892: INFO: Latency metrics for node latest-worker Mar 21 23:38:47.892: INFO: Logging node info for node latest-worker2 Mar 21 23:38:47.937: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6941952 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-
mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722
":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock
-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-
mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-90
34":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:37:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:37:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:37:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:37:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:37:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 
docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:38:47.938: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:38:47.995: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:38:48.026: INFO: ss2-0 started at 2021-03-21 23:36:16 +0000 UTC (0+1 container statuses recorded) Mar 21 
23:38:48.026: INFO: Container webserver ready: false, restart count 0 Mar 21 23:38:48.026: INFO: hairpin started at 2021-03-21 23:36:39 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:48.026: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:38:48.026: INFO: pod-submit-status-1-5 started at 2021-03-21 23:38:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:48.026: INFO: Container busybox ready: false, restart count 0 Mar 21 23:38:48.026: INFO: pod-submit-status-2-5 started at 2021-03-21 23:38:35 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:48.026: INFO: Container busybox ready: false, restart count 0 Mar 21 23:38:48.026: INFO: chaos-daemon-wl4fl started at 2021-03-21 23:31:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:48.026: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:38:48.026: INFO: kindnet-vhlbm started at 2021-03-21 23:31:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:48.026: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:38:48.026: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:48.026: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:38:48.026: INFO: coredns-74ff55c5b-7tkvj started at 2021-03-21 23:31:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:48.026: INFO: Container coredns ready: true, restart count 0 Mar 21 23:38:48.026: INFO: pod-submit-status-0-4 started at 2021-03-21 23:38:37 +0000 UTC (0+1 container statuses recorded) Mar 21 23:38:48.026: INFO: Container busybox ready: false, restart count 0 W0321 23:38:48.086324 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 21 23:38:48.487: INFO: Latency metrics for node latest-worker2 Mar 21 23:38:48.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9522" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [132.657 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should allow pods to hairpin back to themselves through services [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986 Mar 21 23:38:46.151: Unexpected error: <*errors.errorString | 0xc0026f8020>: { s: "no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s", } no subset of available IP address found for the endpoint hairpin-test within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012 ------------------------------ {"msg":"FAILED [sig-network] Services should allow pods to hairpin back to themselves through services","total":54,"completed":11,"skipped":1735,"failed":4,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] ESIPP [Slow] should work for type=NodePort 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927 [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:38:48.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename esipp STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858 Mar 21 23:38:48.718: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:38:48.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "esipp-419" for this suite. 
[AfterEach] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866 S [SKIPPING] in Spec Setup (BeforeEach) [0.294 seconds] [sig-network] ESIPP [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should work for type=NodePort [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should release NodePorts on delete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:38:48.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should release NodePorts on delete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561 STEP: creating service nodeport-reuse with type NodePort in namespace services-1296 STEP: deleting original service nodeport-reuse Mar 21 23:38:50.154: INFO: Creating new host exec pod Mar 21 23:38:50.231: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:38:52.474: INFO: The status of Pod hostexec is Pending, waiting for it to be Running 
(with Ready = true) Mar 21 23:38:54.327: INFO: The status of Pod hostexec is Running (Ready = true) Mar 21 23:38:54.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-1296 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :30614' | tail -n +2 | grep LISTEN' Mar 21 23:38:54.630: INFO: stderr: "+ ss -ant46 'sport = :30614'\n+ tail -n +2\n+ grep LISTEN\n" Mar 21 23:38:54.630: INFO: stdout: "" STEP: creating service nodeport-reuse with same NodePort 30614 STEP: deleting service nodeport-reuse in namespace services-1296 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:38:55.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1296" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:6.595 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should release NodePorts on delete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561 ------------------------------ {"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":54,"completed":12,"skipped":1940,"failed":4,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSS 
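The passing test above verifies NodePort release by exec-ing into a hostNetwork pod and checking that nothing still listens on the freed port. A minimal sketch of that check, assuming the same port 30614 from the log (run it on a node, or via `kubectl exec` into a hostNetwork pod, as the e2e framework does):

```shell
# Sketch of the e2e framework's port-reuse check, not the framework itself.
# PORT mirrors the NodePort seen in the log; substitute your own.
PORT=30614

# The negated pipeline succeeds (exit 0) only when no LISTEN socket holds
# the port: `ss -ant46` lists TCP sockets over IPv4/IPv6, `tail -n +2`
# drops the header row, and `grep LISTEN` looks for a live listener.
check_cmd="! ss -ant46 'sport = :${PORT}' | tail -n +2 | grep LISTEN"

# Print the command the test would run inside the hostexec pod.
echo "$check_cmd"
```

Because the check runs with `! ... | grep LISTEN`, empty `ss` output means the kernel has released the port and a new Service can immediately reuse the same NodePort, which is exactly what the subsequent "creating service nodeport-reuse with same NodePort 30614" step exercises.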
------------------------------ [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:38:55.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should function for endpoint-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256 STEP: Performing setup for networking test in namespace nettest-5149 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 23:38:55.778: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 21 23:38:56.406: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:38:59.051: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:39:00.703: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:39:02.640: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:39:04.435: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:39:06.568: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:39:08.461: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:39:10.431: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 
21 23:39:12.432: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:39:14.571: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 21 23:39:14.998: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 21 23:39:17.003: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 21 23:39:23.281: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 21 23:39:23.281: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 21 23:39:23.566: INFO: Service node-port-service in namespace nettest-5149 found. Mar 21 23:39:23.777: INFO: Service session-affinity-service in namespace nettest-5149 found. STEP: Waiting for NodePort service to expose endpoint Mar 21 23:39:24.815: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 21 23:39:25.834: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(udp) netserver-0 (endpoint) --> 10.96.144.73:90 (config.clusterIP) Mar 21 23:39:25.913: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostname&protocol=udp&host=10.96.144.73&port=90&tries=1'] Namespace:nettest-5149 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:39:25.913: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:39:26.085: INFO: Waiting for responses: map[netserver-1:{}] Mar 21 23:39:28.213: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostname&protocol=udp&host=10.96.144.73&port=90&tries=1'] Namespace:nettest-5149 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false 
Quiet:false} Mar 21 23:39:28.213: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:39:28.463: INFO: Waiting for responses: map[netserver-1:{}] Mar 21 23:39:30.584: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostname&protocol=udp&host=10.96.144.73&port=90&tries=1'] Namespace:nettest-5149 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:39:30.584: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:39:30.759: INFO: Waiting for responses: map[] Mar 21 23:39:30.759: INFO: reached 10.96.144.73 after 2/34 tries STEP: dialing(udp) netserver-0 (endpoint) --> 172.18.0.9:30176 (nodeIP) Mar 21 23:39:30.773: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostname&protocol=udp&host=172.18.0.9&port=30176&tries=1'] Namespace:nettest-5149 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:39:30.773: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:39:31.299: INFO: Waiting for responses: map[netserver-1:{}] Mar 21 23:39:33.347: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostname&protocol=udp&host=172.18.0.9&port=30176&tries=1'] Namespace:nettest-5149 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:39:33.348: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:39:33.489: INFO: Waiting for responses: map[netserver-1:{}] Mar 21 23:39:35.585: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostname&protocol=udp&host=172.18.0.9&port=30176&tries=1'] Namespace:nettest-5149 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 
21 23:39:35.585: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:39:36.100: INFO: Waiting for responses: map[netserver-1:{}] Mar 21 23:39:38.636: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.118:8080/dial?request=hostname&protocol=udp&host=172.18.0.9&port=30176&tries=1'] Namespace:nettest-5149 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:39:38.636: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:39:38.802: INFO: Waiting for responses: map[] Mar 21 23:39:38.802: INFO: reached 172.18.0.9 after 3/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:39:38.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-5149" for this suite. • [SLOW TEST:44.482 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should function for endpoint-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":54,"completed":13,"skipped":1956,"failed":4,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to 
hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod resolv.conf /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:39:39.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod resolv.conf /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458 STEP: Preparing a test DNS service with injected DNS names... Mar 21 23:39:41.018: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-3aff70ed-a166-4420-8652-23c23d867d42 dns-6094 fb153680-4332-4c86-9d3d-e1262f2c03f3 6944195 0 2021-03-21 23:39:40 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-21 23:39:40 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-m6fjx,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-bgbbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bgbbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[/coredns],Args:[-co
nf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-bgbbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Ena
bleServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 21 23:39:47.565: INFO: testServerIP is 10.244.1.206 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 21 23:39:47.615: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils dns-6094 a444ad48-9f26-4e26-ab86-b174e34022e9 6944372 0 2021-03-21 23:39:47 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-21 23:39:47 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bgbbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bgbbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test
-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bgbbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.1.206],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]P
odReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS option is configured on pod... Mar 21 23:39:55.841: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-6094 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:39:55.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized name server and search path are working... Mar 21 23:39:56.182: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-6094 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:39:56.182: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:39:56.574: INFO: Deleting pod e2e-dns-utils... Mar 21 23:39:56.829: INFO: Deleting pod e2e-configmap-dns-server-3aff70ed-a166-4420-8652-23c23d867d42... Mar 21 23:39:58.645: INFO: Deleting configmap e2e-coredns-configmap-m6fjx... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:39:59.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6094" for this suite. 
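For readers reproducing this outside the e2e framework: the pod created in the "dnsPolicy=None and customized dnsConfig" step above corresponds to a plain manifest. The sketch below is reconstructed from the logged pod spec (nameserver 10.244.1.206 is the testServerIP reported above, search domain resolv.conf.local, ndots:2); it is an illustrative equivalent, not the test's literal source.

```yaml
# Sketch of the e2e-dns-utils pod, reconstructed from the logged spec above.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-dns-utils
spec:
  dnsPolicy: "None"            # kubelet builds resolv.conf solely from dnsConfig
  dnsConfig:
    nameservers:
      - 10.244.1.206           # testServerIP: the CoreDNS test-server pod
    searches:
      - resolv.conf.local
    options:
      - name: ndots
        value: "2"
  containers:
    - name: agnhost-container
      image: k8s.gcr.io/e2e-test-images/agnhost:2.28
      args: ["pause"]
```

With this in place, `cat /etc/resolv.conf` inside the container (the verification step the test runs next) should show only the configured nameserver, search path, and ndots option.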
• [SLOW TEST:21.079 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod resolv.conf /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":54,"completed":14,"skipped":1993,"failed":4,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Provider:GCE] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:40:00.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Provider:GCE] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 Mar 21 23:40:02.439: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:40:02.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6020" for this suite. S [SKIPPING] [1.649 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Provider:GCE] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53 [BeforeEach] [sig-network] KubeProxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:40:02.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kube-proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should set TCP CLOSE_WAIT timeout [Privileged] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53 Mar 21 23:40:03.149: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:05.292: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:07.824: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running 
(with Ready = true) Mar 21 23:40:10.033: INFO: The status of Pod e2e-net-exec is Running (Ready = true) STEP: Launching a server daemon on node latest-worker2 (node ip: 172.18.0.13, image: k8s.gcr.io/e2e-test-images/agnhost:2.28) Mar 21 23:40:11.142: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:13.882: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:15.561: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:17.699: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:19.162: INFO: The status of Pod e2e-net-server is Running (Ready = true) STEP: Launching a client connection on node latest-worker (node ip: 172.18.0.9, image: k8s.gcr.io/e2e-test-images/agnhost:2.28) Mar 21 23:40:21.354: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:23.594: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:25.788: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:27.436: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:40:29.441: INFO: The status of Pod e2e-net-client is Running (Ready = true) STEP: Checking conntrack entries for the timeout Mar 21 23:40:29.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kube-proxy-1180 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 172.18.0.13 | grep -m 1 'CLOSE_WAIT.*dport=11302' ' Mar 21 23:40:39.811: INFO: stderr: "+ conntrack -L -f ipv4 -d 172.18.0.13\n+ grep -m 1 CLOSE_WAIT.*dport=11302\nconntrack v1.4.5 (conntrack-tools): 1 flow entries have been shown.\n" Mar 
21 23:40:39.811: INFO: stdout: "tcp 6 3588 CLOSE_WAIT src=10.244.2.132 dst=172.18.0.13 sport=40240 dport=11302 src=172.18.0.13 dst=172.18.0.9 sport=11302 dport=40240 [ASSURED] mark=0 use=1\n" Mar 21 23:40:39.811: INFO: conntrack entry for node 172.18.0.13 and port 11302: tcp 6 3588 CLOSE_WAIT src=10.244.2.132 dst=172.18.0.13 sport=40240 dport=11302 src=172.18.0.13 dst=172.18.0.9 sport=11302 dport=40240 [ASSURED] mark=0 use=1 [AfterEach] [sig-network] KubeProxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:40:39.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kube-proxy-1180" for this suite. • [SLOW TEST:37.507 seconds] [sig-network] KubeProxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should set TCP CLOSE_WAIT timeout [Privileged] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53 ------------------------------ {"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":54,"completed":15,"skipped":2338,"failed":4,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722 [BeforeEach] [sig-network] Loadbalancing: L7 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:40:40.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69 Mar 21 23:40:40.439: INFO: Found ClusterRoles; assuming RBAC is enabled. [BeforeEach] [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688 Mar 21 23:40:40.573: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706 STEP: No ingress created, no cleanup necessary [AfterEach] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:40:40.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-6020" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.496 seconds] [sig-network] Loadbalancing: L7 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 [Slow] Nginx /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685 should conform to Ingress spec [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:40:40.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to update service type to NodePort listening on same port number but different protocols /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211 STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-7911 Mar 21 23:40:40.893: INFO: Service Port TCP: 80 STEP: changing the TCP service to type=NodePort STEP: creating replication controller nodeport-update-service in namespace services-7911 I0321 23:40:41.112681 7 
runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-7911, replica count: 2 I0321 23:40:44.164207 7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:40:47.164498 7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:40:50.164692 7 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 21 23:40:50.164: INFO: Creating new exec pod E0321 23:40:56.321268 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:40:57.785342 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:41:00.792128 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:41:06.848478 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:41:17.839151 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0321 23:41:38.854261 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find 
the requested resource E0321 23:42:22.156606 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 21 23:42:56.320: FAIL: Unexpected error: <*errors.errorString | 0xc002128020>: { s: "no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s", } no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.13() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00386cd80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 21 23:42:56.320: INFO: Cleaning up the updating NodePorts test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-7911". STEP: Found 14 events. 
Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:41 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-866r2 Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:41 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-7gzbm Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:41 +0000 UTC - event for nodeport-update-service-7gzbm: {default-scheduler } Scheduled: Successfully assigned services-7911/nodeport-update-service-7gzbm to latest-worker2 Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:41 +0000 UTC - event for nodeport-update-service-866r2: {default-scheduler } Scheduled: Successfully assigned services-7911/nodeport-update-service-866r2 to latest-worker Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:43 +0000 UTC - event for nodeport-update-service-7gzbm: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:43 +0000 UTC - event for nodeport-update-service-866r2: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:44 +0000 UTC - event for nodeport-update-service-7gzbm: {kubelet latest-worker2} Created: Created container nodeport-update-service Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:44 +0000 UTC - event for nodeport-update-service-866r2: {kubelet latest-worker} Created: Created container nodeport-update-service Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:44 +0000 UTC - event for nodeport-update-service-866r2: {kubelet latest-worker} Started: Started container nodeport-update-service Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:45 +0000 UTC - event for nodeport-update-service-7gzbm: {kubelet latest-worker2} Started: Started container nodeport-update-service Mar 21 23:42:57.102: INFO: At 
2021-03-21 23:40:50 +0000 UTC - event for execpoddz5nl: {default-scheduler } Scheduled: Successfully assigned services-7911/execpoddz5nl to latest-worker2 Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:51 +0000 UTC - event for execpoddz5nl: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:53 +0000 UTC - event for execpoddz5nl: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:42:57.102: INFO: At 2021-03-21 23:40:54 +0000 UTC - event for execpoddz5nl: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:42:57.137: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:42:57.137: INFO: execpoddz5nl latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:50 +0000 UTC }] Mar 21 23:42:57.137: INFO: nodeport-update-service-7gzbm latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:41 +0000 UTC }] Mar 21 23:42:57.137: INFO: nodeport-update-service-866r2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:40:41 +0000 UTC }] Mar 21 23:42:57.137: INFO: Mar 21 23:42:57.234: INFO: Logging node info for node 
latest-control-plane Mar 21 23:42:57.278: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6944020 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{
"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:39:30 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:39:30 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:39:30 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:39:30 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:42:57.278: INFO: Logging kubelet events for node latest-control-plane Mar 21 23:42:57.329: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 21 23:42:57.395: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:57.395: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 21 23:42:57.395: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:57.395: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 21 23:42:57.395: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:57.395: INFO: Container kube-scheduler ready: true, restart count 0 Mar 21 23:42:57.395: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:57.395: INFO: Container kube-apiserver ready: true, restart count 0 Mar 21 23:42:57.395: INFO: kindnet-94zqp 
started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:57.395: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:42:57.395: INFO: coredns-74ff55c5b-xcknl started at 2021-03-21 23:31:55 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:57.395: INFO: Container coredns ready: true, restart count 0 Mar 21 23:42:57.395: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:57.395: INFO: Container etcd ready: true, restart count 0 Mar 21 23:42:57.395: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:57.395: INFO: Container kube-proxy ready: true, restart count 0 W0321 23:42:57.451010 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:42:58.121: INFO: Latency metrics for node latest-control-plane Mar 21 23:42:58.121: INFO: Logging node info for node latest-worker Mar 21 23:42:58.295: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6941925 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volum
es-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-m
ock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-
csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 19:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:38:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:38:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:38:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:38:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:42:58.295: INFO: Logging kubelet events for node latest-worker Mar 21 23:42:58.642: INFO: Logging 
pods the kubelet thinks is on node latest-worker Mar 21 23:42:58.843: INFO: chaos-controller-manager-69c479c674-7xglh started at 2021-03-21 23:27:10 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:58.843: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:42:58.843: INFO: pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7 started at (0+0 container statuses recorded) Mar 21 23:42:58.843: INFO: pod-submit-status-2-13 started at (0+0 container statuses recorded) Mar 21 23:42:58.843: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:58.843: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:42:58.843: INFO: pod-submit-status-1-11 started at 2021-03-21 23:42:55 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:58.843: INFO: Container busybox ready: false, restart count 0 Mar 21 23:42:58.843: INFO: nodeport-update-service-866r2 started at 2021-03-21 23:40:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:58.843: INFO: Container nodeport-update-service ready: true, restart count 0 Mar 21 23:42:58.843: INFO: pod-submit-status-0-12 started at 2021-03-21 23:42:55 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:58.844: INFO: Container busybox ready: false, restart count 0 Mar 21 23:42:58.844: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:58.844: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:42:58.844: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:58.844: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:42:58.844: INFO: server-envvars-f38b1502-1949-4f9f-a626-622eca4b024a started at 2021-03-21 23:42:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:42:58.844: INFO: Container srv ready: true, restart count 0 W0321 23:42:59.033470 7 metrics_grabber.go:105] Did not receive an external 
client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:42:59.480: INFO: Latency metrics for node latest-worker Mar 21 23:42:59.480: INFO: Logging node info for node latest-worker2 Mar 21 23:43:00.019: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6948512 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"c
si-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3
887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5776":"csi-mock-csi-mock-volumes-5776","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-m
ock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"c
si-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes
-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:37:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:42:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:42:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:42:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:42:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 
docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:43:00.020: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:43:00.951: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:43:01.205: INFO: chaos-daemon-wl4fl started at 2021-03-21 23:31:45 +0000 UTC (0+1 container statuses recorded) Mar 
21 23:43:01.205: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:43:01.205: INFO: kindnet-vhlbm started at 2021-03-21 23:31:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:43:01.205: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:43:01.205: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:43:01.205: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:43:01.205: INFO: coredns-74ff55c5b-7tkvj started at 2021-03-21 23:31:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:43:01.205: INFO: Container coredns ready: true, restart count 0 Mar 21 23:43:01.205: INFO: nodeport-update-service-7gzbm started at 2021-03-21 23:40:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:43:01.205: INFO: Container nodeport-update-service ready: true, restart count 0 Mar 21 23:43:01.205: INFO: execpoddz5nl started at 2021-03-21 23:40:50 +0000 UTC (0+1 container statuses recorded) Mar 21 23:43:01.205: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:43:01.205: INFO: csi-mockplugin-0 started at 2021-03-21 23:41:00 +0000 UTC (0+3 container statuses recorded) Mar 21 23:43:01.205: INFO: Container csi-provisioner ready: true, restart count 0 Mar 21 23:43:01.205: INFO: Container driver-registrar ready: true, restart count 0 Mar 21 23:43:01.205: INFO: Container mock ready: true, restart count 0 Mar 21 23:43:01.205: INFO: pvc-volume-tester-j8xwq started at 2021-03-21 23:41:18 +0000 UTC (0+1 container statuses recorded) Mar 21 23:43:01.205: INFO: Container volume-tester ready: false, restart count 0 W0321 23:43:01.363879 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 21 23:43:01.754: INFO: Latency metrics for node latest-worker2 Mar 21 23:43:01.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7911" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [141.675 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to update service type to NodePort listening on same port number but different protocols [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211 Mar 21 23:42:56.320: Unexpected error: <*errors.errorString | 0xc002128020>: { s: "no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s", } no subset of available IP address found for the endpoint nodeport-update-service within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":54,"completed":15,"skipped":2377,"failed":5,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]} 
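The endpoint probes in this suite (both the NodePort check that failed above and the Granular Checks test that follows) are driven by agnhost's `/dial` endpoint on the test-container-pod: the framework execs a `curl` inside that pod and parses the JSON response for the set of backend hostnames that answered. A minimal sketch of how that probe URL is assembled, using the IPs and ports from this log purely as example values (the `dial_url` helper is hypothetical, not part of the e2e framework):

```shell
#!/bin/sh
# Hypothetical helper mirroring the probe URL the e2e framework builds.
# $1 = probe pod IP, $2 = target host, $3 = target port, $4 = protocol
dial_url() {
  printf "http://%s:9080/dial?request=hostname&protocol=%s&host=%s&port=%s&tries=1" \
    "$1" "$4" "$2" "$3"
}

# Values taken from the log above; they are illustrations, not live endpoints.
url=$(dial_url 10.244.2.155 10.96.78.217 90 udp)
echo "$url"
# Per attempt, the framework runs the equivalent of:
#   kubectl exec -n nettest-5747 test-container-pod -- /bin/sh -c "curl -g -q -s '$url'"
# (via client-go ExecWithOptions rather than kubectl itself).
```

The test passes once every expected backend hostname has responded; the shrinking `Waiting for responses: map[...]` entries in the log show which backends are still outstanding on each try.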
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Services should update endpoints: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:43:02.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83 STEP: Executing a successful http request from the external internet [It] should update endpoints: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351 STEP: Performing setup for networking test in namespace nettest-5747 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 21 23:43:03.387: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 21 23:43:04.288: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:43:06.791: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:43:08.425: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:43:10.415: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:43:12.595: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:43:14.386: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:43:16.666: INFO: The status of 
Pod netserver-0 is Running (Ready = false) Mar 21 23:43:18.297: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 21 23:43:20.505: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 21 23:43:20.680: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 21 23:43:29.120: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 STEP: Getting node addresses Mar 21 23:43:29.120: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating the service on top of the pods in kubernetes Mar 21 23:43:29.546: INFO: Service node-port-service in namespace nettest-5747 found. Mar 21 23:43:29.851: INFO: Service session-affinity-service in namespace nettest-5747 found. STEP: Waiting for NodePort service to expose endpoint Mar 21 23:43:30.935: INFO: Waiting for amount of service:node-port-service endpoints to be 2 STEP: Waiting for Session Affinity service to expose endpoint Mar 21 23:43:31.966: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 STEP: dialing(udp) test-container-pod --> 10.96.78.217:90 (config.clusterIP) Mar 21 23:43:32.040: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:32.040: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:32.325: INFO: Waiting for responses: map[netserver-0:{}] Mar 21 23:43:34.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:34.338: INFO: >>> 
kubeConfig: /root/.kube/config Mar 21 23:43:34.464: INFO: Waiting for responses: map[netserver-0:{}] Mar 21 23:43:36.495: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:36.495: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:36.818: INFO: Waiting for responses: map[netserver-0:{}] Mar 21 23:43:38.915: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:38.915: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:39.138: INFO: Waiting for responses: map[] Mar 21 23:43:39.138: INFO: reached 10.96.78.217 after 3/34 tries STEP: Deleting a pod which, will be replaced with a new endpoint Mar 21 23:43:39.363: INFO: Waiting for pod netserver-0 to disappear Mar 21 23:43:39.644: INFO: Pod netserver-0 no longer exists Mar 21 23:43:40.644: INFO: Waiting for amount of service:node-port-service endpoints to be 1 STEP: dialing(udp) test-container-pod --> 10.96.78.217:90 (config.clusterIP) (endpoint recovery) Mar 21 23:43:45.857: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:45.857: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:46.020: INFO: Waiting for responses: map[] Mar 21 23:43:48.169: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:48.169: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:48.462: INFO: Waiting for responses: map[] Mar 21 23:43:50.507: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:50.507: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:50.696: INFO: Waiting for responses: map[] Mar 21 23:43:52.771: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:52.771: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:53.238: INFO: Waiting for responses: map[] Mar 21 23:43:55.252: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:55.252: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:55.458: INFO: Waiting for responses: map[] Mar 21 23:43:57.558: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:57.558: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:43:57.754: INFO: Waiting for responses: map[] Mar 21 23:43:59.809: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:43:59.809: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:00.206: INFO: Waiting for responses: map[] Mar 21 23:44:02.223: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:02.223: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:02.402: INFO: Waiting for responses: map[] Mar 21 23:44:04.421: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:04.421: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:04.718: INFO: Waiting for responses: map[] Mar 21 23:44:06.774: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:06.774: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:06.931: INFO: Waiting for responses: map[] Mar 21 23:44:08.965: 
INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:08.965: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:09.101: INFO: Waiting for responses: map[] Mar 21 23:44:11.134: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:11.134: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:11.433: INFO: Waiting for responses: map[] Mar 21 23:44:13.553: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:13.553: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:13.712: INFO: Waiting for responses: map[] Mar 21 23:44:15.721: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:15.721: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:15.850: INFO: Waiting for responses: map[] Mar 21 23:44:17.880: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 
PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:17.881: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:18.398: INFO: Waiting for responses: map[] Mar 21 23:44:20.512: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:20.512: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:20.702: INFO: Waiting for responses: map[] Mar 21 23:44:22.757: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:22.757: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:22.959: INFO: Waiting for responses: map[] Mar 21 23:44:25.013: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:25.013: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:25.182: INFO: Waiting for responses: map[] Mar 21 23:44:27.202: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:27.202: INFO: >>> kubeConfig: /root/.kube/config Mar 
21 23:44:28.046: INFO: Waiting for responses: map[] Mar 21 23:44:30.060: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:30.060: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:30.243: INFO: Waiting for responses: map[] Mar 21 23:44:32.277: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:32.277: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:32.419: INFO: Waiting for responses: map[] Mar 21 23:44:34.583: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:34.583: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:34.930: INFO: Waiting for responses: map[] Mar 21 23:44:36.944: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:36.944: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:37.128: INFO: Waiting for responses: map[] Mar 21 23:44:39.136: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:39.136: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:39.327: INFO: Waiting for responses: map[] Mar 21 23:44:41.379: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:41.379: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:41.672: INFO: Waiting for responses: map[] Mar 21 23:44:43.864: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:43.864: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:44.244: INFO: Waiting for responses: map[] Mar 21 23:44:46.284: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:46.284: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:46.474: INFO: Waiting for responses: map[] Mar 21 23:44:48.498: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:48.499: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:48.612: INFO: Waiting for responses: map[] Mar 21 23:44:50.845: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:50.845: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:51.442: INFO: Waiting for responses: map[] Mar 21 23:44:53.769: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:53.769: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:54.217: INFO: Waiting for responses: map[] Mar 21 23:44:56.606: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:56.606: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:57.147: INFO: Waiting for responses: map[] Mar 21 23:44:59.254: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:44:59.255: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:44:59.980: INFO: Waiting for responses: map[] Mar 21 23:45:02.033: 
INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:45:02.033: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:45:03.029: INFO: Waiting for responses: map[] Mar 21 23:45:05.140: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:9080/dial?request=hostname&protocol=udp&host=10.96.78.217&port=90&tries=1'] Namespace:nettest-5747 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:45:05.140: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:45:05.907: INFO: Waiting for responses: map[] Mar 21 23:45:05.907: INFO: reached 10.96.78.217 after 33/34 tries [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:45:05.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-5747" for this suite. 
• [SLOW TEST:124.409 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 Granular Checks: Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151 should update endpoints: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: udp","total":54,"completed":16,"skipped":2532,"failed":5,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should implement service.kubernetes.io/headless /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:45:06.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should implement service.kubernetes.io/headless /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916 STEP: creating service-headless in namespace services-7314 STEP: creating service service-headless in namespace services-7314 STEP: creating replication controller service-headless in namespace services-7314 I0321 23:45:07.812004 7 runners.go:190] Created replication controller with name: service-headless, namespace: services-7314, replica count: 3 I0321 23:45:10.862847 7 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:45:13.863446 7 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:45:16.864318 7 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: creating service in namespace services-7314 STEP: creating service service-headless-toggled in namespace services-7314 STEP: creating replication controller service-headless-toggled in namespace services-7314 I0321 23:45:17.485699 7 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-7314, replica count: 3 I0321 23:45:20.537043 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:45:23.538606 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0321 23:45:26.539657 7 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: verifying service is up Mar 21 23:45:26.549: INFO: Creating new host exec pod Mar 21 23:45:26.631: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:28.715: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:30.670: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 21 23:45:30.670: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 21 23:45:36.763: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.6.224:80 2>&1 || true; echo; done" in pod services-7314/verify-service-up-host-exec-pod Mar 21 23:45:36.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-7314 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.6.224:80 2>&1 || true; echo; done' Mar 21 23:45:37.321: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n" Mar 21 23:45:37.321: INFO: stdout: "service-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-ts
w7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-hea
dless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\n" Mar 21 23:45:37.322: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.6.224:80 2>&1 || true; echo; done" in pod services-7314/verify-service-up-exec-pod-d2jxl Mar 21 23:45:37.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 
--kubeconfig=/root/.kube/config --namespace=services-7314 exec verify-service-up-exec-pod-d2jxl -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.6.224:80 2>&1 || true; echo; done' Mar 21 23:45:37.936: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.96.6.224:80\n+ echo\n [identical "+ wget -q -T 1 -O - http://10.96.6.224:80" / "+ echo" pairs repeated for the remaining probes, truncated]\n" Mar 21 23:45:37.936: INFO: stdout: 
"service-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headles
s-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\
nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7314 STEP: Deleting pod verify-service-up-exec-pod-d2jxl in namespace services-7314 STEP: verifying service-headless is not up Mar 21 23:45:38.476: INFO: Creating new host exec pod Mar 21 23:45:38.799: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:41.091: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:42.829: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:44.895: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 21 23:45:44.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-7314 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.168.115:80 && echo service-down-failed' Mar 21 23:45:47.146: INFO: rc: 28 Mar 21 23:45:47.147: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 
http://10.96.168.115:80 && echo service-down-failed" in pod services-7314/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-7314 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.168.115:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.168.115:80 command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7314 STEP: adding service.kubernetes.io/headless label STEP: verifying service is not up Mar 21 23:45:47.422: INFO: Creating new host exec pod Mar 21 23:45:47.895: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:49.913: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:51.930: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true) Mar 21 23:45:51.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-7314 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.6.224:80 && echo service-down-failed' Mar 21 23:45:54.207: INFO: rc: 28 Mar 21 23:45:54.207: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.96.6.224:80 && echo service-down-failed" in pod services-7314/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-7314 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.96.6.224:80 && echo service-down-failed: Command stdout: stderr: + curl -g -s --connect-timeout 2 http://10.96.6.224:80 
command terminated with exit code 28 error: exit status 28 Output: STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7314 STEP: removing service.kubernetes.io/headless annotation STEP: verifying service is up Mar 21 23:45:55.338: INFO: Creating new host exec pod Mar 21 23:45:55.481: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:57.577: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:45:59.507: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true) Mar 21 23:45:59.507: INFO: Creating new exec pod STEP: verifying service has 3 reachable backends Mar 21 23:46:05.935: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.6.224:80 2>&1 || true; echo; done" in pod services-7314/verify-service-up-host-exec-pod Mar 21 23:46:05.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-7314 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.6.224:80 2>&1 || true; echo; done' Mar 21 23:46:06.503: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.96.6.224:80\n+ echo\n [identical "+ wget -q -T 1 -O - http://10.96.6.224:80" / "+ echo" pairs repeated through probe 150, truncated]\n" Mar 21 23:46:06.503: INFO: stdout: 
"service-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headles
s-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-wdm65\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\
nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\nservice-headless-toggled-9bxxp\nservice-headless-toggled-tsw7v\nservice-headless-toggled-tsw7v\nservice-headless-toggled-wdm65\n" Mar 21 23:46:06.504: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.6.224:80 2>&1 || true; echo; done" in pod services-7314/verify-service-up-exec-pod-9p7hf Mar 21 23:46:06.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-7314 exec verify-service-up-exec-pod-9p7hf -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.6.224:80 2>&1 || true; echo; done' Mar 21 23:46:07.146: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ echo\n+ wget -q -T 1 -O - http://10.96.6.224:80\n+ 
echo\n [identical "+ wget -q -T 1 -O - http://10.96.6.224:80" / "+ echo" pairs repeated through probe 150, truncated]\n" Mar 21 23:46:07.146: INFO: stdout: 
"service-headless-toggled-wdm65\nservice-headless-toggled-tsw7v\nservice-headless-toggled-9bxxp\n [... remaining stdout lines alternate verbatim among the same three endpoint pod names: service-headless-toggled-wdm65, service-headless-toggled-tsw7v, service-headless-toggled-9bxxp ...]\n" STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7314 STEP: Deleting pod verify-service-up-exec-pod-9p7hf in namespace services-7314 STEP: verifying service-headless is still not up Mar 21 23:46:07.938: INFO: Creating new host exec pod Mar 21 23:46:08.004: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:46:10.207: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:46:12.340: FAIL: Unexpected error: <*errors.StatusError | 0xc001f581e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "pods \"verify-service-down-host-exec-pod\" not found", Reason: "NotFound", Details: { Name: "verify-service-down-host-exec-pod", Group: "", Kind: "pods", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } pods "verify-service-down-host-exec-pod" not found occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.launchHostExecPod(0x73e8b88, 0xc005d3edc0, 0xc0034394d0, 0xd, 0x6be938f, 
0x21, 0xc0001120c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2679 +0x259 k8s.io/kubernetes/test/e2e/network.verifyServeHostnameServiceDown(0x73e8b88, 0xc005d3edc0, 0xc0034394d0, 0xd, 0xc0044f38f0, 0xd, 0x50, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:383 +0xaa k8s.io/kubernetes/test/e2e/network.glob..func24.29() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1965 +0x9e5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00386cd80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-7314". STEP: Found 80 events. 
Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:08 +0000 UTC - event for service-headless: {replication-controller } SuccessfulCreate: Created pod: service-headless-g4sgx Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:08 +0000 UTC - event for service-headless: {replication-controller } SuccessfulCreate: Created pod: service-headless-fm7x8 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:08 +0000 UTC - event for service-headless: {replication-controller } SuccessfulCreate: Created pod: service-headless-x7j2q Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:08 +0000 UTC - event for service-headless-fm7x8: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-fm7x8 to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:08 +0000 UTC - event for service-headless-g4sgx: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-g4sgx to latest-worker Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:08 +0000 UTC - event for service-headless-x7j2q: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-x7j2q to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:09 +0000 UTC - event for service-headless-fm7x8: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:10 +0000 UTC - event for service-headless-g4sgx: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:11 +0000 UTC - event for service-headless-x7j2q: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:12 +0000 UTC - event for service-headless-fm7x8: {kubelet latest-worker2} Created: Created container service-headless Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:13 +0000 UTC - event 
for service-headless-fm7x8: {kubelet latest-worker2} Started: Started container service-headless Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:13 +0000 UTC - event for service-headless-g4sgx: {kubelet latest-worker} Created: Created container service-headless Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:13 +0000 UTC - event for service-headless-g4sgx: {kubelet latest-worker} Started: Started container service-headless Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:13 +0000 UTC - event for service-headless-x7j2q: {kubelet latest-worker2} Created: Created container service-headless Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:14 +0000 UTC - event for service-headless-x7j2q: {kubelet latest-worker2} Started: Started container service-headless Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:17 +0000 UTC - event for service-headless-toggled: {replication-controller } SuccessfulCreate: Created pod: service-headless-toggled-tsw7v Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:17 +0000 UTC - event for service-headless-toggled: {replication-controller } SuccessfulCreate: Created pod: service-headless-toggled-9bxxp Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:17 +0000 UTC - event for service-headless-toggled: {replication-controller } SuccessfulCreate: Created pod: service-headless-toggled-wdm65 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:17 +0000 UTC - event for service-headless-toggled-9bxxp: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-toggled-9bxxp to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:17 +0000 UTC - event for service-headless-toggled-tsw7v: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-toggled-tsw7v to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:17 +0000 UTC - event for service-headless-toggled-wdm65: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-toggled-wdm65 to latest-worker Mar 21 23:46:12.768: 
INFO: At 2021-03-21 23:45:19 +0000 UTC - event for service-headless-toggled-tsw7v: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:19 +0000 UTC - event for service-headless-toggled-wdm65: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:21 +0000 UTC - event for service-headless-toggled-9bxxp: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:22 +0000 UTC - event for service-headless-toggled-tsw7v: {kubelet latest-worker2} Created: Created container service-headless-toggled Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:22 +0000 UTC - event for service-headless-toggled-wdm65: {kubelet latest-worker} Created: Created container service-headless-toggled Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:23 +0000 UTC - event for service-headless-toggled-9bxxp: {kubelet latest-worker2} Started: Started container service-headless-toggled Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:23 +0000 UTC - event for service-headless-toggled-9bxxp: {kubelet latest-worker2} Created: Created container service-headless-toggled Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:23 +0000 UTC - event for service-headless-toggled-tsw7v: {kubelet latest-worker2} Started: Started container service-headless-toggled Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:23 +0000 UTC - event for service-headless-toggled-wdm65: {kubelet latest-worker} Started: Started container service-headless-toggled Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:26 +0000 UTC - event for verify-service-up-host-exec-pod: {default-scheduler } Scheduled: Successfully assigned services-7314/verify-service-up-host-exec-pod to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 
23:45:27 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:28 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:29 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:30 +0000 UTC - event for verify-service-up-exec-pod-d2jxl: {default-scheduler } Scheduled: Successfully assigned services-7314/verify-service-up-exec-pod-d2jxl to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:32 +0000 UTC - event for verify-service-up-exec-pod-d2jxl: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:33 +0000 UTC - event for verify-service-up-exec-pod-d2jxl: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:34 +0000 UTC - event for verify-service-up-exec-pod-d2jxl: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:38 +0000 UTC - event for verify-service-down-host-exec-pod: {default-scheduler } Scheduled: Successfully assigned services-7314/verify-service-down-host-exec-pod to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:38 +0000 UTC - event for verify-service-up-exec-pod-d2jxl: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:38 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:40 +0000 UTC - event for 
verify-service-down-host-exec-pod: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:42 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:42 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:47 +0000 UTC - event for verify-service-down-host-exec-pod: {default-scheduler } Scheduled: Successfully assigned services-7314/verify-service-down-host-exec-pod to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:47 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:49 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:50 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:50 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:54 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:55 +0000 UTC - event for verify-service-up-host-exec-pod: {default-scheduler } Scheduled: Successfully assigned services-7314/verify-service-up-host-exec-pod to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:56 +0000 UTC - event for verify-service-up-host-exec-pod: 
{kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:57 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:57 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:45:59 +0000 UTC - event for verify-service-up-exec-pod-9p7hf: {default-scheduler } Scheduled: Successfully assigned services-7314/verify-service-up-exec-pod-9p7hf to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:46:01 +0000 UTC - event for verify-service-up-exec-pod-9p7hf: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.768: INFO: At 2021-03-21 23:46:02 +0000 UTC - event for verify-service-up-exec-pod-9p7hf: {kubelet latest-worker2} Created: Created container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:46:03 +0000 UTC - event for verify-service-up-exec-pod-9p7hf: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:46:07 +0000 UTC - event for verify-service-down-host-exec-pod: {default-scheduler } Scheduled: Successfully assigned services-7314/verify-service-down-host-exec-pod to latest-worker2 Mar 21 23:46:12.768: INFO: At 2021-03-21 23:46:07 +0000 UTC - event for verify-service-up-exec-pod-9p7hf: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:46:07 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 21 23:46:12.768: INFO: At 2021-03-21 23:46:09 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Pulled: Container 
image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for service-headless-fm7x8: {kubelet latest-worker2} Killing: Stopping container service-headless Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for service-headless-fm7x8: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-7314/service-headless-fm7x8 Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for service-headless-toggled-9bxxp: {kubelet latest-worker2} Killing: Stopping container service-headless-toggled Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for service-headless-toggled-9bxxp: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-7314/service-headless-toggled-9bxxp Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for service-headless-toggled-tsw7v: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-7314/service-headless-toggled-tsw7v Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for service-headless-toggled-tsw7v: {kubelet latest-worker2} Killing: Stopping container service-headless-toggled Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for service-headless-x7j2q: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-7314/service-headless-x7j2q Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for service-headless-x7j2q: {kubelet latest-worker2} Killing: Stopping container service-headless Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for verify-service-down-host-exec-pod: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-7314/verify-service-down-host-exec-pod Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:10 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Created: Created container 
agnhost-container Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:11 +0000 UTC - event for service-headless: {replication-controller } SuccessfulCreate: Created pod: service-headless-7pg8j Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:11 +0000 UTC - event for service-headless: {replication-controller } SuccessfulCreate: Created pod: service-headless-4cvnl Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:11 +0000 UTC - event for service-headless-7pg8j: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-7pg8j to latest-worker Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:11 +0000 UTC - event for service-headless-toggled: {replication-controller } SuccessfulCreate: Created pod: service-headless-toggled-bgfcm Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:11 +0000 UTC - event for service-headless-toggled: {replication-controller } SuccessfulCreate: Created pod: service-headless-toggled-x8t9x Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:11 +0000 UTC - event for service-headless-toggled-x8t9x: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-toggled-x8t9x to latest-worker Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:11 +0000 UTC - event for verify-service-down-host-exec-pod: {kubelet latest-worker2} Started: Started container agnhost-container Mar 21 23:46:12.769: INFO: At 2021-03-21 23:46:12 +0000 UTC - event for service-headless-toggled-bgfcm: {default-scheduler } Scheduled: Successfully assigned services-7314/service-headless-toggled-bgfcm to latest-worker Mar 21 23:46:13.260: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:46:13.260: INFO: service-headless-4cvnl latest-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:46:11 +0000 UTC }] Mar 21 23:46:13.260: INFO: service-headless-7pg8j latest-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:46:11 +0000 UTC }] Mar 21 23:46:13.260: INFO: service-headless-fm7x8 latest-worker2 Running 1s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:08 +0000 UTC }] Mar 21 23:46:13.260: INFO: service-headless-g4sgx latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:08 +0000 UTC }] Mar 21 23:46:13.261: INFO: service-headless-toggled-9bxxp latest-worker2 Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:17 +0000 UTC }] Mar 21 23:46:13.261: INFO: service-headless-toggled-bgfcm latest-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:46:11 +0000 UTC }] Mar 21 23:46:13.261: INFO: service-headless-toggled-tsw7v latest-worker2 Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:17 +0000 UTC }] Mar 21 23:46:13.261: INFO: service-headless-toggled-wdm65 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:24 +0000 
UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:17 +0000 UTC }] Mar 21 23:46:13.261: INFO: service-headless-toggled-x8t9x latest-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:46:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:46:11 +0000 UTC ContainersNotReady containers with unready status: [service-headless-toggled]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:46:11 +0000 UTC ContainersNotReady containers with unready status: [service-headless-toggled]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:46:11 +0000 UTC }] Mar 21 23:46:13.261: INFO: service-headless-x7j2q latest-worker2 Running 1s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:45:08 +0000 UTC }] Mar 21 23:46:13.261: INFO: Mar 21 23:46:14.214: INFO: Logging node info for node latest-control-plane Mar 21 23:46:15.439: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6950852 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:44:31 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:44:31 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:44:31 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:44:31 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:46:15.439: INFO: Logging kubelet events for node latest-control-plane Mar 21 23:46:16.167: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 21 23:46:16.798: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:16.798: INFO: Container etcd ready: true, restart count 0 Mar 21 23:46:16.798: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:16.798: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:46:16.798: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:16.798: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 21 23:46:16.798: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:16.798: INFO: Container kube-scheduler ready: true, restart count 0 Mar 21 23:46:16.798: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:16.798: INFO: Container kube-apiserver ready: true, restart count 0 Mar 21 23:46:16.798: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:16.798: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:46:16.798: INFO: coredns-74ff55c5b-xcknl started at 2021-03-21 23:31:55 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:16.798: INFO: Container coredns ready: true, restart count 0 Mar 21 23:46:16.798: 
INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:16.798: INFO: Container local-path-provisioner ready: true, restart count 0 W0321 23:46:16.963703 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:46:17.543: INFO: Latency metrics for node latest-control-plane Mar 21 23:46:17.543: INFO: Logging node info for node latest-worker Mar 21 23:46:17.692: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6953018 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-moc
k-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689"
:"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volume
s-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9555":"csi-mock-csi-mock-volumes-9555","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-vol
umes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"
f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:45:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:45:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:45:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:45:51 +0000 
UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 
docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:46:17.693: INFO: Logging kubelet events for node latest-worker Mar 21 23:46:18.048: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 21 23:46:18.298: INFO: service-headless-toggled-bgfcm started at 2021-03-21 23:46:12 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container service-headless-toggled ready: false, restart count 0 Mar 21 23:46:18.298: INFO: service-headless-toggled-wdm65 started at 2021-03-21 23:45:17 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container service-headless-toggled ready: true, restart count 0 Mar 21 23:46:18.298: INFO: service-headless-7pg8j started at 2021-03-21 23:46:11 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container service-headless ready: false, restart count 0 Mar 21 23:46:18.298: INFO: busybox-4f04a6b9-4415-45de-8587-75ac655ba9a4 started at 2021-03-21 23:44:13 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container busybox ready: true, restart count 0 Mar 21 23:46:18.298: INFO: chaos-controller-manager-69c479c674-7xglh started at 2021-03-21 23:27:10 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:46:18.298: INFO: service-headless-4cvnl started at 2021-03-21 23:46:12 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container service-headless ready: false, restart count 0 Mar 21 23:46:18.298: INFO: rally-27f2308d-wdocmhvr-1 started at 2021-03-21 23:46:02 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container rally-27f2308d-wdocmhvr 
ready: false, restart count 0 Mar 21 23:46:18.298: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:46:18.298: INFO: coredns-74ff55c5b-55hwc started at 2021-03-21 23:46:11 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container coredns ready: false, restart count 0 Mar 21 23:46:18.298: INFO: service-headless-g4sgx started at 2021-03-21 23:45:08 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container service-headless ready: true, restart count 0 Mar 21 23:46:18.298: INFO: csi-mockplugin-attacher-0 started at 2021-03-21 23:45:00 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container csi-attacher ready: true, restart count 0 Mar 21 23:46:18.298: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:46:18.298: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:46:18.298: INFO: csi-mockplugin-0 started at 2021-03-21 23:45:00 +0000 UTC (0+3 container statuses recorded) Mar 21 23:46:18.298: INFO: Container csi-provisioner ready: true, restart count 0 Mar 21 23:46:18.298: INFO: Container driver-registrar ready: true, restart count 0 Mar 21 23:46:18.298: INFO: Container mock ready: true, restart count 0 Mar 21 23:46:18.298: INFO: service-headless-toggled-x8t9x started at 2021-03-21 23:46:11 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:18.298: INFO: Container service-headless-toggled ready: false, restart count 0 W0321 23:46:19.105467 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 21 23:46:20.248: INFO: Latency metrics for node latest-worker Mar 21 23:46:20.248: INFO: Logging node info for node latest-worker2 Mar 21 23:46:21.136: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6953435 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mo
ck-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973",
"csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-v
olumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-moc
k-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234
","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:37:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:37:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-21 23:46:10 +0000 UTC FieldsV1 
{"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-21 23:46:10 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:42:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:42:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:42:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:42:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 
docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d 
k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:46:21.138: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:46:21.395: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:46:21.683: INFO: service-headless-toggled-tsw7v started at 2021-03-21 23:45:17 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container service-headless-toggled ready: true, restart count 0 Mar 21 23:46:21.683: INFO: taint-eviction-2 started at 2021-03-21 23:46:10 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container pause ready: true, restart count 0 Mar 21 23:46:21.683: INFO: chaos-daemon-wl4fl started at 2021-03-21 23:31:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container chaos-daemon ready: false, restart count 0 Mar 21 23:46:21.683: INFO: kindnet-vhlbm started at 2021-03-21 23:31:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:46:21.683: INFO: rally-27f2308d-wdocmhvr-0 started at 2021-03-21 23:45:57 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container rally-27f2308d-wdocmhvr ready: false, restart count 0 Mar 21 23:46:21.683: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:46:21.683: INFO: coredns-74ff55c5b-7tkvj started at 2021-03-21 23:31:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container coredns ready: true, restart count 0 Mar 21 23:46:21.683: INFO: service-headless-toggled-9bxxp started at 2021-03-21 23:45:17 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container service-headless-toggled 
ready: true, restart count 0 Mar 21 23:46:21.683: INFO: service-headless-x7j2q started at 2021-03-21 23:45:08 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container service-headless ready: true, restart count 0 Mar 21 23:46:21.683: INFO: service-headless-fm7x8 started at 2021-03-21 23:45:08 +0000 UTC (0+1 container statuses recorded) Mar 21 23:46:21.683: INFO: Container service-headless ready: true, restart count 0 W0321 23:46:22.277858 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:46:22.904: INFO: Latency metrics for node latest-worker2 Mar 21 23:46:22.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7314" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [77.135 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should implement service.kubernetes.io/headless [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916 Mar 21 23:46:12.340: Unexpected error: <*errors.StatusError | 0xc001f581e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "pods \"verify-service-down-host-exec-pod\" not found", Reason: "NotFound", Details: { Name: "verify-service-down-host-exec-pod", Group: "", Kind: "pods", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } pods "verify-service-down-host-exec-pod" not found occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2679 ------------------------------ {"msg":"FAILED [sig-network] Services should implement 
service.kubernetes.io/headless","total":54,"completed":16,"skipped":2609,"failed":6,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:46:23.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85 Mar 21 23:46:25.474: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
alternatives.log
containers/

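The directory listing above came back through the API server's node proxy subresource, which forwards the request to the kubelet's `/logs` endpoint on the port named in the URL. As a minimal sketch (the helper function is illustrative, not part of the e2e framework; node name and port are taken from the log line):

```python
# Sketch: assembling the API-server proxy path seen in the log, which
# forwards to the kubelet's /logs endpoint on the given node and port.
# node_logs_proxy_path is a hypothetical helper, not a Kubernetes API.

def node_logs_proxy_path(node: str, kubelet_port: int = 10250) -> str:
    """Return the API path that proxies to the kubelet's /logs endpoint."""
    return f"/api/v1/nodes/{node}:{kubelet_port}/proxy/logs/"

print(node_logs_proxy_path("latest-worker"))
# -> /api/v1/nodes/latest-worker:10250/proxy/logs/
```

The response body is the kubelet's plain directory index, which is why the test output is just file and directory names such as `alternatives.log` and `containers/`.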
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-6683
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 23:46:27.379: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 21 23:46:27.382: INFO: Unschedulable nodes= 1, maximum value for starting tests= 0
Mar 21 23:46:27.382: INFO: 	-> Node latest-worker2 [[[ Ready=true, Network(available)=false, Taints=[{kubernetes.io/e2e-evict-taint-key evictTaintVal NoExecute 2021-03-21 23:46:10 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Mar 21 23:46:27.382: INFO: ==== node wait: 2 out of 3 nodes are ready, max notReady allowed 0.  Need 1 more before starting.
Mar 21 23:46:57.384: INFO: Unschedulable nodes= 1, maximum value for starting tests= 0
Mar 21 23:46:57.384: INFO: 	-> Node latest-worker2 [[[ Ready=true, Network(available)=false, Taints=[{kubernetes.io/e2e-evict-taint-key evictTaintVal NoExecute 2021-03-21 23:46:10 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Mar 21 23:46:57.384: INFO: ==== node wait: 2 out of 3 nodes are ready, max notReady allowed 0.  Need 1 more before starting.
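The wait loop above holds the test until every node is free of blocking taints: `latest-worker2` is Ready, but the leftover `NoExecute` eviction taint from the earlier test is not in the nonblocking allow-list, so the node does not count. A rough sketch of that filter, using simplified stand-in data shapes rather than the real framework types:

```python
# Sketch of the node-wait filtering visible above: a node is usable for
# tests only if every taint on it appears in the nonblocking allow-list.
# The dict-of-lists representation is a simplification for illustration.

NONBLOCKING_TAINTS = {"node-role.kubernetes.io/master"}

def blocking_nodes(nodes: dict) -> list:
    """Return names of nodes carrying at least one taint outside the allow-list."""
    return [name for name, taints in nodes.items()
            if any(t not in NONBLOCKING_TAINTS for t in taints)]

nodes = {
    "latest-worker": [],
    "latest-worker2": ["kubernetes.io/e2e-evict-taint-key"],   # leftover NoExecute taint
    "latest-control-plane": ["node-role.kubernetes.io/master"],  # nonblocking
}
print(blocking_nodes(nodes))  # -> ['latest-worker2']
```

Once the eviction taint expires or is removed, `blocking_nodes` would return an empty list and the wait loop proceeds, which matches the log resuming pod creation at 23:47:28.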
Mar 21 23:47:28.511: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:47:30.684: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:47:33.162: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:47:34.568: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:47:36.580: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:47:38.552: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:47:40.523: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:47:42.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:47:44.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:47:46.639: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:47:48.627: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 21 23:47:49.019: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:47:51.097: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:47:53.082: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 21 23:47:57.320: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 21 23:47:57.320: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 21 23:47:57.560: INFO: Service node-port-service in namespace nettest-6683 found.
Mar 21 23:47:57.907: INFO: Service session-affinity-service in namespace nettest-6683 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 21 23:47:58.968: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 21 23:47:59.982: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: creating a second service with same selector
Mar 21 23:48:00.181: INFO: Service second-node-port-service in namespace nettest-6683 found.
Mar 21 23:48:01.218: INFO: Waiting for amount of service:second-node-port-service endpoints to be 2
STEP: dialing(http) netserver-0 (endpoint) --> 10.96.11.242:80 (config.clusterIP)
Mar 21 23:48:01.290: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.11.242&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:01.290: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:01.427: INFO: Waiting for responses: map[netserver-0:{}]
Mar 21 23:48:03.437: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.11.242&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:03.437: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:03.607: INFO: Waiting for responses: map[]
Mar 21 23:48:03.607: INFO: reached 10.96.11.242 after 1/34 tries
STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.9:30379 (nodeIP)
Mar 21 23:48:03.618: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:03.618: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:03.750: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:05.754: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:05.754: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:05.920: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:07.938: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:07.938: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:08.215: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:10.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:10.233: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:10.326: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:12.354: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:12.354: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:12.536: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:14.573: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:14.573: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:14.706: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:16.712: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:16.713: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:16.835: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:18.896: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:18.896: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:19.091: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:21.689: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30379&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:21.689: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:22.401: INFO: Waiting for responses: map[]
Mar 21 23:48:22.401: INFO: reached 172.18.0.9 after 8/34 tries
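The "Waiting for responses: map[...]" / "reached ... after 8/34 tries" lines above come from a polling loop: the framework repeatedly execs `curl` inside `test-container-pod` against the agnhost `/dial` endpoint and removes each endpoint name from a pending set as it answers, succeeding once the set is empty. A minimal sketch of that loop, assuming the `/dial` reply is JSON with a `responses` list (`probe` below is a hypothetical stand-in for one exec'd curl call, not the framework's API):

```python
# Sketch of the polling loop behind the "Waiting for responses" log lines.
# probe() is a hypothetical stand-in returning one /dial JSON body, assumed
# to look roughly like {"responses": ["<hostname>", ...]}.
import json
import time

def wait_for_endpoints(probe, expected, max_tries=34, delay=0.0):
    """Poll until every name in `expected` has answered at least once.

    Returns the number of tries used, mirroring log lines like
    "reached 172.18.0.9 after 8/34 tries".
    """
    waiting = set(expected)  # the map[netserver-1:{}] shown in the log
    for attempt in range(1, max_tries + 1):
        body = probe()
        for name in json.loads(body).get("responses", []):
            waiting.discard(name)
        if not waiting:  # corresponds to "Waiting for responses: map[]"
            return attempt
        time.sleep(delay)
    raise TimeoutError(f"still waiting for: {sorted(waiting)}")

# Canned example: the first two probes hear only netserver-0; the third
# finally hears from netserver-1 as well, so the loop stops at try 3.
replies = iter([
    '{"responses": ["netserver-0"]}',
    '{"responses": ["netserver-0"]}',
    '{"responses": ["netserver-1"]}',
])
tries = wait_for_endpoints(lambda: next(replies),
                           {"netserver-0", "netserver-1"})
print(tries)  # → 3
```

Each probe passes `tries=1`, so one curl yields at most one backend name; that is why several attempts are needed even when all endpoints are healthy.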
STEP: dialing(http) netserver-0 (endpoint) --> 10.96.88.173:80 (svc2.clusterIP)
Mar 21 23:48:22.407: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.88.173&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:22.407: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:23.016: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:25.048: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.88.173&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:25.048: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:25.154: INFO: Waiting for responses: map[]
Mar 21 23:48:25.154: INFO: reached 10.96.88.173 after 1/34 tries
STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.9:30415 (nodeIP)
Mar 21 23:48:25.161: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30415&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:25.161: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:25.306: INFO: Waiting for responses: map[netserver-0:{}]
Mar 21 23:48:27.336: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30415&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:27.336: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:27.455: INFO: Waiting for responses: map[netserver-0:{}]
Mar 21 23:48:29.699: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30415&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:29.699: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:30.268: INFO: Waiting for responses: map[netserver-0:{}]
Mar 21 23:48:32.304: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30415&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:32.304: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:32.562: INFO: Waiting for responses: map[]
Mar 21 23:48:32.563: INFO: reached 172.18.0.9 after 3/34 tries
STEP: deleting the original node port service
STEP: dialing(http) netserver-0 (endpoint) --> 10.96.88.173:80 (svc2.clusterIP)
Mar 21 23:48:48.225: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.88.173&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:48.225: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:48.342: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:50.395: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.88.173&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:50.395: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:50.541: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:52.580: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.88.173&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:52.580: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:52.750: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:54.778: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.88.173&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:54.778: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:54.965: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:57.156: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.88.173&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:57.156: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:57.395: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:48:59.406: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=10.96.88.173&port=80&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:59.406: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:48:59.716: INFO: Waiting for responses: map[]
Mar 21 23:48:59.716: INFO: reached 10.96.88.173 after 5/34 tries
STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.9:30415 (nodeIP)
Mar 21 23:48:59.966: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30415&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:48:59.966: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:49:00.147: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:49:02.197: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=30415&tries=1'] Namespace:nettest-6683 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:49:02.197: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:49:02.459: INFO: Waiting for responses: map[]
Mar 21 23:49:02.459: INFO: reached 172.18.0.9 after 1/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:49:02.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6683" for this suite.

• [SLOW TEST:155.758 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector","total":54,"completed":18,"skipped":3043,"failed":6,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless"]}
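The `{"msg":"PASSED ...","total":54,...}` records interleaved with the log are plain JSON, one per completed spec, so suite bookkeeping can be recovered with the stdlib. A small sketch (the sample record below is abbreviated, not the full line above):

```python
# Parse one of the per-spec JSON progress records the suite emits.
import json

def summarize(progress_line):
    rec = json.loads(progress_line)
    remaining = rec["total"] - rec["completed"]
    return rec["completed"], rec["failed"], remaining, rec.get("failures", [])

# Abbreviated sample in the same shape as the log's progress records.
sample = ('{"msg":"PASSED [sig-network] Networking ...","total":54,'
          '"completed":18,"skipped":3043,"failed":6,'
          '"failures":["[sig-network] Services should implement '
          'service.kubernetes.io/headless"]}')
completed, failed, remaining, failures = summarize(sample)
print(completed, failed, remaining)  # → 18 6 36
```

Filtering a full run's output to just these JSON lines gives a running pass/fail tally without re-parsing the free-form INFO lines.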
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:49:02.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-4165
STEP: creating service service-proxy-disabled in namespace services-4165
STEP: creating replication controller service-proxy-disabled in namespace services-4165
I0321 23:49:04.666797       7 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-4165, replica count: 3
I0321 23:49:07.718235       7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0321 23:49:10.718807       7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0321 23:49:13.720260       7 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-4165
STEP: creating service service-proxy-toggled in namespace services-4165
STEP: creating replication controller service-proxy-toggled in namespace services-4165
I0321 23:49:13.856238       7 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-4165, replica count: 3
I0321 23:49:16.907196       7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0321 23:49:19.907421       7 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
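The `runners.go:190` lines above tally replicas by state while the replication controller comes up. A rough sketch of that counting, assuming phases are reported as plain strings like `PodStatus.Phase` (the real runner also tracks readiness separately, e.g. the `runningButNotReady` bucket):

```python
# Sketch of the per-RC status line logged while replicas start.
# Phases are plain-string stand-ins for PodStatus.Phase (assumption).
from collections import Counter

def status_line(name, phases):
    c = Counter(phases)
    return (f"{name} Pods: {len(phases)} out of {len(phases)} created, "
            f"{c['Running']} running, {c['Pending']} pending")

print(status_line("service-proxy-toggled", ["Pending"] * 3))
# → service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending
print(status_line("service-proxy-toggled", ["Running"] * 3))
# → service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending
```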
STEP: verifying service is up
Mar 21 23:49:20.012: INFO: Creating new host exec pod
Mar 21 23:49:20.242: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:49:22.414: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:49:24.272: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:49:26.247: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:49:28.245: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Mar 21 23:49:28.245: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Mar 21 23:49:34.325: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:49:34.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:49:35.665: INFO: rc: 1
Mar 21 23:49:35.665: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:49:35.665: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
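The "verifying service has 3 reachable backends" check works by running the `wget` loop shown above (150 requests against the service VIP), where each successful hit prints the serving pod's hostname, then diffing the expected pod names against the names that actually appeared. The still-missing set is what the "Unable to reach the following endpoints" map lists. A minimal sketch of that diff, with hypothetical loop output:

```python
# Sketch of the backend-reachability diff behind "Unable to reach the
# following endpoints of service ...".
def unreached(expected, wget_output):
    """Return expected backends that never answered during the wget loop."""
    seen = set(wget_output.split())
    return sorted(set(expected) - seen)  # the map[...] in the log line

expected = ["service-proxy-toggled-64tkf",
            "service-proxy-toggled-d7fcz",
            "service-proxy-toggled-nnqfc"]

# Hypothetical loop output in which one backend never answered in time.
output = "service-proxy-toggled-64tkf\nservice-proxy-toggled-d7fcz\n"
print(unreached(expected, output))  # → ['service-proxy-toggled-nnqfc']
```

In the failures above the `kubectl exec` itself returns `NotFound`, so the loop produces no output at all and every expected backend stays in the unreached set.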
Mar 21 23:49:40.665: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:49:40.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:49:41.818: INFO: rc: 1
Mar 21 23:49:41.818: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:49:41.818: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:49:46.820: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:49:46.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:49:47.329: INFO: rc: 1
Mar 21 23:49:47.329: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:49:47.329: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:49:52.329: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:49:52.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:49:52.526: INFO: rc: 1
Mar 21 23:49:52.526: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:49:52.526: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:49:57.527: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:49:57.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:49:57.621: INFO: rc: 1
Mar 21 23:49:57.621: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:49:57.621: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:50:02.621: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:50:02.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:50:02.724: INFO: rc: 1
Mar 21 23:50:02.724: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:50:02.724: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:50:07.725: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:50:07.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:50:07.824: INFO: rc: 1
Mar 21 23:50:07.824: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:50:07.824: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:50:12.825: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:50:12.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:50:12.965: INFO: rc: 1
Mar 21 23:50:12.965: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:50:12.965: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:50:17.966: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:50:17.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:50:18.163: INFO: rc: 1
Mar 21 23:50:18.163: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:50:18.163: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:50:23.163: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:50:23.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:50:23.428: INFO: rc: 1
Mar 21 23:50:23.428: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:50:23.428: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:50:28.428: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:50:28.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:50:28.577: INFO: rc: 1
Mar 21 23:50:28.578: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:50:28.578: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:50:33.578: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:50:33.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:50:34.244: INFO: rc: 1
Mar 21 23:50:34.244: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:50:34.244: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:50:39.245: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:50:39.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:50:43.291: INFO: rc: 1
Mar 21 23:50:43.291: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:50:43.291: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
[... 26 identical retry cycles elided: from Mar 21 23:50:48 through Mar 21 23:53:00, the same exec was attempted roughly every 5 seconds — each cycle logged 'Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod', ran the same kubectl command, and returned rc: 1 with empty stdout, stderr 'Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found', exit status 1, followed by 'INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]' ...]
Mar 21 23:53:05.784: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:05.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:05.919: INFO: rc: 1
Mar 21 23:53:05.919: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:05.919: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:10.920: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:10.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:11.425: INFO: rc: 1
Mar 21 23:53:11.425: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:11.425: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:16.426: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:16.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:16.706: INFO: rc: 1
Mar 21 23:53:16.707: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:16.707: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:21.708: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:21.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:22.089: INFO: rc: 1
Mar 21 23:53:22.089: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:22.089: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:27.090: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:27.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:27.678: INFO: rc: 1
Mar 21 23:53:27.678: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:27.678: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:32.679: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:32.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:32.860: INFO: rc: 1
Mar 21 23:53:32.860: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:32.860: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:37.860: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:37.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:37.988: INFO: rc: 1
Mar 21 23:53:37.988: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:37.988: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:42.988: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:42.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:43.160: INFO: rc: 1
Mar 21 23:53:43.160: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:43.160: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:48.161: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:48.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:48.323: INFO: rc: 1
Mar 21 23:53:48.323: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:48.323: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:53.324: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:53.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:53.602: INFO: rc: 1
Mar 21 23:53:53.602: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:53.602: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:53:58.603: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:53:58.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:53:58.891: INFO: rc: 1
Mar 21 23:53:58.891: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:53:58.891: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:54:03.891: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:54:03.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:54:04.024: INFO: rc: 1
Mar 21 23:54:04.024: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:54:04.024: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:54:09.025: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:54:09.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:54:09.379: INFO: rc: 1
Mar 21 23:54:09.379: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:54:09.379: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:54:14.380: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:54:14.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:54:14.533: INFO: rc: 1
Mar 21 23:54:14.533: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:54:14.533: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:54:19.534: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:54:19.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:54:19.651: INFO: rc: 1
Mar 21 23:54:19.651: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:54:19.651: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:54:24.652: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:54:24.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:54:24.833: INFO: rc: 1
Mar 21 23:54:24.833: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:54:24.833: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
Mar 21 23:54:29.834: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:54:29.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done'
Mar 21 23:54:29.985: INFO: rc: 1
Mar 21 23:54:29.985: INFO: error while kubectl execing "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done" in pod services-4165/verify-service-up-host-exec-pod: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4165 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.96.189.95:80 2>&1 || true; echo; done:
Command stdout:

stderr:
Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found

error:
exit status 1
Output: 
Mar 21 23:54:29.985: INFO: Unable to reach the following endpoints of service 10.96.189.95: map[service-proxy-toggled-64tkf:{} service-proxy-toggled-d7fcz:{} service-proxy-toggled-nnqfc:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-4165
STEP: Deleting pod verify-service-up-exec-pod-sdwgb in namespace services-4165
Mar 21 23:54:35.236: FAIL: Unexpected error:
    <*errors.errorString | 0xc00493c370>: {
        s: "service verification failed for: 10.96.189.95\nexpected [service-proxy-toggled-64tkf service-proxy-toggled-d7fcz service-proxy-toggled-nnqfc]\nreceived []",
    }
    service verification failed for: 10.96.189.95
    expected [service-proxy-toggled-64tkf service-proxy-toggled-d7fcz service-proxy-toggled-nnqfc]
    received []
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.28()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1889 +0x5fa
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00386cd80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
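[Editor's note] The FAIL message is misleading on its face: every "Unable to reach" line above was really a kubectl exec failure, because the probe pod itself no longer existed, so no HTTP request was ever made. A hedged sketch (function name and labels are hypothetical, strings copied from the log) of telling the two failure modes apart from the captured rc and stderr:

```python
def classify_probe_failure(rc, stderr):
    """Distinguish 'the exec target pod is gone' from 'the service is down'."""
    if rc != 0 and "Error from server (NotFound)" in stderr:
        return "probe-pod-missing"   # exec target gone; says nothing about the service
    if rc != 0:
        return "exec-failed"         # some other exec problem
    return "service-unreachable"     # exec ran, but no endpoint answered

stderr = 'Error from server (NotFound): pods "verify-service-up-host-exec-pod" not found'
print(classify_probe_failure(1, stderr))
```

Under this reading, the verification result ("received []") reflects the deleted exec pod, not necessarily the reachability of 10.96.189.95.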
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4165".
STEP: Found 65 events.
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:04 +0000 UTC - event for service-proxy-disabled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-disabled-wrpvb
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:04 +0000 UTC - event for service-proxy-disabled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-disabled-p996m
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:04 +0000 UTC - event for service-proxy-disabled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-disabled-j2gv8
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:04 +0000 UTC - event for service-proxy-disabled-j2gv8: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-disabled-j2gv8 to latest-worker2
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:04 +0000 UTC - event for service-proxy-disabled-p996m: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-disabled-p996m to latest-worker
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:04 +0000 UTC - event for service-proxy-disabled-wrpvb: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-disabled-wrpvb to latest-worker
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:06 +0000 UTC - event for service-proxy-disabled-j2gv8: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:06 +0000 UTC - event for service-proxy-disabled-wrpvb: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:08 +0000 UTC - event for service-proxy-disabled-p996m: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:09 +0000 UTC - event for service-proxy-disabled-j2gv8: {kubelet latest-worker2} Created: Created container service-proxy-disabled
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:10 +0000 UTC - event for service-proxy-disabled-j2gv8: {kubelet latest-worker2} Started: Started container service-proxy-disabled
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:10 +0000 UTC - event for service-proxy-disabled-wrpvb: {kubelet latest-worker} Created: Created container service-proxy-disabled
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:10 +0000 UTC - event for service-proxy-disabled-wrpvb: {kubelet latest-worker} Started: Started container service-proxy-disabled
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:11 +0000 UTC - event for service-proxy-disabled-p996m: {kubelet latest-worker} Created: Created container service-proxy-disabled
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:11 +0000 UTC - event for service-proxy-disabled-p996m: {kubelet latest-worker} Started: Started container service-proxy-disabled
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:13 +0000 UTC - event for service-proxy-toggled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-toggled-64tkf
Mar 21 23:54:35.378: INFO: At 2021-03-21 23:49:13 +0000 UTC - event for service-proxy-toggled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-toggled-nnqfc
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:13 +0000 UTC - event for service-proxy-toggled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-toggled-d7fcz
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:13 +0000 UTC - event for service-proxy-toggled-64tkf: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-toggled-64tkf to latest-worker2
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:13 +0000 UTC - event for service-proxy-toggled-d7fcz: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-toggled-d7fcz to latest-worker2
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:13 +0000 UTC - event for service-proxy-toggled-nnqfc: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-toggled-nnqfc to latest-worker
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:15 +0000 UTC - event for service-proxy-toggled-64tkf: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:16 +0000 UTC - event for service-proxy-toggled-d7fcz: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:16 +0000 UTC - event for service-proxy-toggled-nnqfc: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:17 +0000 UTC - event for service-proxy-toggled-64tkf: {kubelet latest-worker2} Created: Created container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:18 +0000 UTC - event for service-proxy-toggled-64tkf: {kubelet latest-worker2} Started: Started container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:18 +0000 UTC - event for service-proxy-toggled-d7fcz: {kubelet latest-worker2} Started: Started container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:18 +0000 UTC - event for service-proxy-toggled-d7fcz: {kubelet latest-worker2} Created: Created container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:18 +0000 UTC - event for service-proxy-toggled-nnqfc: {kubelet latest-worker} Started: Started container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:18 +0000 UTC - event for service-proxy-toggled-nnqfc: {kubelet latest-worker} Created: Created container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:19 +0000 UTC - event for service-proxy-toggled: {endpoint-slice-controller } FailedToUpdateEndpointSlices: Error updating Endpoint Slices for Service services-4165/service-proxy-toggled: failed to update service-proxy-toggled-v6hrz EndpointSlice for Service services-4165/service-proxy-toggled: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "service-proxy-toggled-v6hrz": the object has been modified; please apply your changes to the latest version and try again
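[Editor's note] The FailedToUpdateEndpointSlices event above is a routine optimistic-concurrency conflict: another writer modified the EndpointSlice between the controller's read and write, and the controller is expected to re-read the latest version and retry. A minimal Python sketch of that retry-on-conflict pattern (all names hypothetical; this is not client-go code):

```python
class ConflictError(Exception):
    """Stand-in for an HTTP 409 'the object has been modified' response."""

def retry_on_conflict(update, attempts=5):
    """Re-run `update` until it succeeds or the attempt budget is spent.
    Real clients re-fetch the object and add jittered backoff between tries."""
    for _ in range(attempts):
        try:
            return update()
        except ConflictError:
            continue
    raise RuntimeError("gave up after repeated conflicts")

calls = {"n": 0}
def update():
    calls["n"] += 1
    if calls["n"] < 3:            # first two writes race with another writer
        raise ConflictError()
    return "updated"

print(retry_on_conflict(update))  # → updated
print(calls["n"])                 # → 3
```

Such conflicts are normally transient and self-healing, which is consistent with this being the only occurrence in the event stream.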
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:20 +0000 UTC - event for verify-service-up-host-exec-pod: {default-scheduler } Scheduled: Successfully assigned services-4165/verify-service-up-host-exec-pod to latest-worker
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:21 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:24 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker} Created: Created container agnhost-container
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:26 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker} Started: Started container agnhost-container
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:28 +0000 UTC - event for verify-service-up-exec-pod-sdwgb: {default-scheduler } Scheduled: Successfully assigned services-4165/verify-service-up-exec-pod-sdwgb to latest-worker
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:29 +0000 UTC - event for verify-service-up-exec-pod-sdwgb: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:31 +0000 UTC - event for verify-service-up-exec-pod-sdwgb: {kubelet latest-worker} Created: Created container agnhost-container
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:32 +0000 UTC - event for verify-service-up-exec-pod-sdwgb: {kubelet latest-worker} Started: Started container agnhost-container
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for service-proxy-disabled-p996m: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-4165/service-proxy-disabled-p996m
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for service-proxy-disabled-p996m: {kubelet latest-worker} Killing: Stopping container service-proxy-disabled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for service-proxy-disabled-wrpvb: {kubelet latest-worker} Killing: Stopping container service-proxy-disabled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for service-proxy-disabled-wrpvb: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-4165/service-proxy-disabled-wrpvb
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for service-proxy-toggled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-toggled-kkztj
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for service-proxy-toggled-nnqfc: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-4165/service-proxy-toggled-nnqfc
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for service-proxy-toggled-nnqfc: {kubelet latest-worker} Killing: Stopping container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for verify-service-up-exec-pod-sdwgb: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod services-4165/verify-service-up-exec-pod-sdwgb
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for verify-service-up-exec-pod-sdwgb: {kubelet latest-worker} Killing: Stopping container agnhost-container
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for verify-service-up-exec-pod-sdwgb: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-4165/verify-service-up-exec-pod-sdwgb
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for verify-service-up-host-exec-pod: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-4165/verify-service-up-host-exec-pod
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:34 +0000 UTC - event for verify-service-up-host-exec-pod: {kubelet latest-worker} Killing: Stopping container agnhost-container
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:35 +0000 UTC - event for service-proxy-disabled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-disabled-86srx
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:35 +0000 UTC - event for service-proxy-disabled-86srx: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-disabled-86srx to latest-worker2
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:35 +0000 UTC - event for service-proxy-toggled-kkztj: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-toggled-kkztj to latest-worker2
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:36 +0000 UTC - event for service-proxy-disabled: {replication-controller } SuccessfulCreate: Created pod: service-proxy-disabled-tl9jg
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:36 +0000 UTC - event for service-proxy-disabled-tl9jg: {default-scheduler } Scheduled: Successfully assigned services-4165/service-proxy-disabled-tl9jg to latest-worker2
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:41 +0000 UTC - event for service-proxy-disabled-86srx: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:41 +0000 UTC - event for service-proxy-toggled-kkztj: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:42 +0000 UTC - event for service-proxy-disabled-tl9jg: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:45 +0000 UTC - event for service-proxy-toggled-kkztj: {kubelet latest-worker2} Created: Created container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:46 +0000 UTC - event for service-proxy-disabled-86srx: {kubelet latest-worker2} Started: Started container service-proxy-disabled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:46 +0000 UTC - event for service-proxy-disabled-86srx: {kubelet latest-worker2} Created: Created container service-proxy-disabled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:46 +0000 UTC - event for service-proxy-disabled-tl9jg: {kubelet latest-worker2} Created: Created container service-proxy-disabled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:46 +0000 UTC - event for service-proxy-toggled-kkztj: {kubelet latest-worker2} Started: Started container service-proxy-toggled
Mar 21 23:54:35.379: INFO: At 2021-03-21 23:49:47 +0000 UTC - event for service-proxy-disabled-tl9jg: {kubelet latest-worker2} Started: Started container service-proxy-disabled
Mar 21 23:54:35.739: INFO: POD                           NODE            PHASE    GRACE  CONDITIONS
Mar 21 23:54:35.739: INFO: service-proxy-disabled-86srx  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:35 +0000 UTC  }]
Mar 21 23:54:35.739: INFO: service-proxy-disabled-j2gv8  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:04 +0000 UTC  }]
Mar 21 23:54:35.739: INFO: service-proxy-disabled-tl9jg  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:36 +0000 UTC  }]
Mar 21 23:54:35.739: INFO: service-proxy-toggled-64tkf   latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:13 +0000 UTC  }]
Mar 21 23:54:35.739: INFO: service-proxy-toggled-d7fcz   latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:13 +0000 UTC  }]
Mar 21 23:54:35.739: INFO: service-proxy-toggled-kkztj   latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:49:34 +0000 UTC  }]
Mar 21 23:54:35.739: INFO: 
Mar 21 23:54:35.937: INFO: 
Logging node info for node latest-control-plane
Mar 21 23:54:36.142: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane    490b9532-4cb6-4803-8805-500c50bef538 6966847 0 2021-02-19 10:11:38 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"
f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:54:32 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:54:32 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:54:32 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:54:32 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 21 23:54:36.142: INFO: 
Logging kubelet events for node latest-control-plane
Mar 21 23:54:36.160: INFO: 
Logging pods the kubelet thinks are on node latest-control-plane
Mar 21 23:54:36.191: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container etcd ready: true, restart count 0
Mar 21 23:54:36.191: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 21 23:54:36.191: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 21 23:54:36.191: INFO: coredns-74ff55c5b-xcknl started at 2021-03-21 23:31:55 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container coredns ready: true, restart count 0
Mar 21 23:54:36.191: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container local-path-provisioner ready: true, restart count 0
Mar 21 23:54:36.191: INFO: coredns-74ff55c5b-7rm8b started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container coredns ready: true, restart count 0
Mar 21 23:54:36.191: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container kube-controller-manager ready: true, restart count 0
Mar 21 23:54:36.191: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container kube-scheduler ready: true, restart count 0
Mar 21 23:54:36.191: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:36.191: INFO: 	Container kube-apiserver ready: true, restart count 0
W0321 23:54:36.226511       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:54:36.424: INFO: 
Latency metrics for node latest-control-plane
Mar 21 23:54:36.424: INFO: 
Logging node info for node latest-worker
Mar 21 23:54:36.568: INFO: Node Info: &Node{ObjectMeta:{latest-worker    52cd6d4b-d53f-435d-801a-04c2822dec44 6960140 0 2021-02-19 10:12:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mo
ck-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-51
20","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volum
es-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/lates
t-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:50:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:50:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:50:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:50:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 21 23:54:36.569: INFO: 
Logging kubelet events for node latest-worker
Mar 21 23:54:36.710: INFO: 
Logging pods the kubelet thinks are on node latest-worker
Mar 21 23:54:37.222: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:37.222: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 21 23:54:37.222: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:37.222: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 21 23:54:37.222: INFO: startup-dcf22de0-3bf3-4ab5-b8ab-d7c897c14a4d started at 2021-03-21 23:54:29 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:37.222: INFO: 	Container busybox ready: false, restart count 0
Mar 21 23:54:37.222: INFO: dns-test-99543f5f-d539-4aa8-8110-3e65fb13fd1f started at 2021-03-21 23:54:32 +0000 UTC (0+3 container statuses recorded)
Mar 21 23:54:37.222: INFO: 	Container jessie-querier ready: false, restart count 0
Mar 21 23:54:37.222: INFO: 	Container querier ready: false, restart count 0
Mar 21 23:54:37.222: INFO: 	Container webserver ready: false, restart count 0
Mar 21 23:54:37.222: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:37.222: INFO: 	Container chaos-daemon ready: true, restart count 0
W0321 23:54:37.309405       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:54:37.993: INFO: 
Latency metrics for node latest-worker
Mar 21 23:54:37.993: INFO: 
Logging node info for node latest-worker2
Mar 21 23:54:38.001: INFO: Node Info: &Node{ObjectMeta:{latest-worker2    7d2a1377-0c6f-45fb-899e-6c307ecb1803 6966386 0 2021-02-19 10:12:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-
mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164
","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock
-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mo
ck-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-93
16","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:52:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:53:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:53:22 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:53:22 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:53:22 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:53:22 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 
docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 21 23:54:38.002: INFO: 
Logging kubelet events for node latest-worker2
Mar 21 23:54:38.048: INFO: 
Logging pods the kubelet thinks are on node latest-worker2
Mar 21 23:54:38.131: INFO: kindnet-gp4fv started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 21 23:54:38.131: INFO: service-proxy-toggled-kkztj started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Mar 21 23:54:38.131: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 21 23:54:38.131: INFO: service-proxy-toggled-64tkf started at 2021-03-21 23:49:13 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container service-proxy-toggled ready: true, restart count 0
Mar 21 23:54:38.131: INFO: chaos-daemon-95pmt started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 21 23:54:38.131: INFO: service-proxy-disabled-j2gv8 started at 2021-03-21 23:49:04 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Mar 21 23:54:38.131: INFO: service-proxy-disabled-86srx started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Mar 21 23:54:38.131: INFO: chaos-controller-manager-69c479c674-k8l6r started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container chaos-mesh ready: true, restart count 0
Mar 21 23:54:38.131: INFO: service-proxy-disabled-tl9jg started at 2021-03-21 23:49:36 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container service-proxy-disabled ready: true, restart count 0
Mar 21 23:54:38.131: INFO: csi-mockplugin-attacher-0 started at 2021-03-21 23:52:36 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container csi-attacher ready: false, restart count 0
Mar 21 23:54:38.131: INFO: csi-mockplugin-0 started at 2021-03-21 23:52:36 +0000 UTC (0+3 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container csi-provisioner ready: false, restart count 0
Mar 21 23:54:38.131: INFO: 	Container driver-registrar ready: false, restart count 0
Mar 21 23:54:38.131: INFO: 	Container mock ready: false, restart count 0
Mar 21 23:54:38.131: INFO: service-proxy-toggled-d7fcz started at 2021-03-21 23:49:13 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:54:38.131: INFO: 	Container service-proxy-toggled ready: true, restart count 0
W0321 23:54:38.167814       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:54:38.480: INFO: 
Latency metrics for node latest-worker2
Mar 21 23:54:38.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4165" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [336.051 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865

  Mar 21 23:54:35.236: Unexpected error:
      <*errors.errorString | 0xc00493c370>: {
          s: "service verification failed for: 10.96.189.95\nexpected [service-proxy-toggled-64tkf service-proxy-toggled-d7fcz service-proxy-toggled-nnqfc]\nreceived []",
      }
      service verification failed for: 10.96.189.95
      expected [service-proxy-toggled-64tkf service-proxy-toggled-d7fcz service-proxy-toggled-nnqfc]
      received []
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1889
------------------------------
{"msg":"FAILED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":54,"completed":18,"skipped":3143,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
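The failure above ("service verification failed ... received []") is the e2e framework's endpoint check: it repeatedly queries the service's ClusterIP (10.96.189.95) and records which backend pod hostnames answer, then compares that set against the expected pods. "received []" means no backend ever answered, i.e. the proxy never programmed the VIP for the `service.kubernetes.io/service-proxy-name`-labeled service. A minimal sketch of that comparison step (function name and shape are hypothetical, not the framework's actual API):

```python
def verify_service_endpoints(observed_hostnames, expected_pods):
    # Passes only when every expected pod name has been seen in responses
    # from the service VIP; an empty observed set reproduces the
    # "received []" failure mode seen in the log above.
    missing = set(expected_pods) - set(observed_hostnames)
    if missing:
        raise AssertionError(
            "service verification failed: expected %s, received %s"
            % (sorted(expected_pods), sorted(set(observed_hostnames))))
```

In the real test the query loop retries for several minutes before giving up, which is why the spec ran for 336 seconds before failing.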
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:54:38.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-8589
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 23:54:39.333: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 21 23:54:39.566: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:54:42.351: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:54:43.981: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:54:45.570: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:54:47.669: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:54:49.917: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:54:51.643: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:54:53.597: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:54:55.580: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:54:57.591: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:54:59.580: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:55:01.591: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 21 23:55:01.683: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 21 23:55:08.124: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 21 23:55:08.124: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 21 23:55:08.883: INFO: Service node-port-service in namespace nettest-8589 found.
Mar 21 23:55:09.461: INFO: Service session-affinity-service in namespace nettest-8589 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 21 23:55:10.538: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 21 23:55:11.646: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.96.97.117:90 (config.clusterIP)
Mar 21 23:55:11.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.227:9080/dial?request=hostname&protocol=udp&host=10.96.97.117&port=90&tries=1'] Namespace:nettest-8589 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:55:11.717: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:55:11.875: INFO: Waiting for responses: map[netserver-1:{}]
Mar 21 23:55:13.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.227:9080/dial?request=hostname&protocol=udp&host=10.96.97.117&port=90&tries=1'] Namespace:nettest-8589 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:55:13.895: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:55:14.126: INFO: Waiting for responses: map[]
Mar 21 23:55:14.126: INFO: reached 10.96.97.117 after 1/34 tries
STEP: dialing(udp) test-container-pod --> 172.18.0.9:30730 (nodeIP)
Mar 21 23:55:14.232: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.227:9080/dial?request=hostname&protocol=udp&host=172.18.0.9&port=30730&tries=1'] Namespace:nettest-8589 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:55:14.232: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:55:14.467: INFO: Waiting for responses: map[netserver-0:{}]
Mar 21 23:55:16.484: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.227:9080/dial?request=hostname&protocol=udp&host=172.18.0.9&port=30730&tries=1'] Namespace:nettest-8589 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:55:16.484: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:55:16.582: INFO: Waiting for responses: map[]
Mar 21 23:55:16.582: INFO: reached 172.18.0.9 after 1/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:55:16.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8589" for this suite.

• [SLOW TEST:37.867 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":54,"completed":19,"skipped":3753,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
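The passing UDP test above works by exec'ing `curl` inside a probe pod against agnhost's `/dial` endpoint on port 9080, which in turn relays the request to the target host:port over UDP and reports which netserver answered. A sketch of how those probe URLs are built and the replies interpreted, assuming the JSON reply shape `{"responses": [...]}` seen with agnhost (helper names here are illustrative, not the framework's):

```python
import json
from urllib.parse import urlencode


def build_dial_url(probe_pod_ip, target_host, target_port,
                   protocol="udp", request="hostname", tries=1):
    # Mirrors the curl command in the log: the probe pod's agnhost
    # container relays `tries` requests to host:port over `protocol`.
    query = urlencode({"request": request, "protocol": protocol,
                       "host": target_host, "port": target_port,
                       "tries": tries})
    return "http://%s:9080/dial?%s" % (probe_pod_ip, query)


def parse_dial_responses(body):
    # An empty list means no backend answered within the timeout,
    # which is why the test polls until the expected-hostname map drains.
    return set(json.loads(body).get("responses", []))
```

The log's "Setting MaxTries for pod polling to 34" line is this retry budget: the probe repeats until every expected hostname (e.g. netserver-0, netserver-1) has been observed or the tries are exhausted.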
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Firewall rule 
  should have correct firewall rules for e2e cluster
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:55:16.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Mar 21 23:55:17.003: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:55:17.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-6699" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.362 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Conntrack 
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:55:17.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Mar 21 23:55:17.519: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:55:19.910: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:55:21.706: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:55:23.604: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node latest-worker2
STEP: Server service created
Mar 21 23:55:23.829: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:55:26.150: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:55:27.890: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:55:30.106: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:55:31.867: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Mar 21 23:56:33.267: INFO: boom-server pod logs: 2021/03/21 23:55:21 external ip: 10.244.1.6
2021/03/21 23:55:21 listen on 0.0.0.0:9000
2021/03/21 23:55:21 probing 10.244.1.6
2021/03/21 23:55:30 tcp packet: &{SrcPort:38611 DestPort:9000 Seq:2832462850 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:30 tcp packet: &{SrcPort:38611 DestPort:9000 Seq:2832462851 Ack:4215559984 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:30 connection established
2021/03/21 23:55:30 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 150 211 251 66 208 144 168 211 244 3 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:30 checksumer: &{sum:501626 oddByte:33 length:39}
2021/03/21 23:55:30 ret:  501659
2021/03/21 23:55:30 ret:  42914
2021/03/21 23:55:30 ret:  42914
2021/03/21 23:55:30 boom packet injected
2021/03/21 23:55:30 tcp packet: &{SrcPort:38611 DestPort:9000 Seq:2832462851 Ack:4215559984 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:32 tcp packet: &{SrcPort:32971 DestPort:9000 Seq:2516013066 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:32 tcp packet: &{SrcPort:32971 DestPort:9000 Seq:2516013067 Ack:725590531 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:32 connection established
2021/03/21 23:55:32 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 128 203 43 62 27 99 149 247 80 11 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:32 checksumer: &{sum:497704 oddByte:33 length:39}
2021/03/21 23:55:32 ret:  497737
2021/03/21 23:55:32 ret:  38992
2021/03/21 23:55:32 ret:  38992
2021/03/21 23:55:32 boom packet injected
2021/03/21 23:55:32 tcp packet: &{SrcPort:32971 DestPort:9000 Seq:2516013067 Ack:725590531 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:34 tcp packet: &{SrcPort:33555 DestPort:9000 Seq:606129259 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:34 tcp packet: &{SrcPort:33555 DestPort:9000 Seq:606129260 Ack:2098788292 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:34 connection established
2021/03/21 23:55:34 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 131 19 125 23 113 36 36 32 204 108 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:34 checksumer: &{sum:394462 oddByte:33 length:39}
2021/03/21 23:55:34 ret:  394495
2021/03/21 23:55:34 ret:  1285
2021/03/21 23:55:34 ret:  1285
2021/03/21 23:55:34 boom packet injected
2021/03/21 23:55:34 tcp packet: &{SrcPort:33555 DestPort:9000 Seq:606129260 Ack:2098788292 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:36 tcp packet: &{SrcPort:44627 DestPort:9000 Seq:518736768 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:36 tcp packet: &{SrcPort:44627 DestPort:9000 Seq:518736769 Ack:3979782779 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:36 connection established
2021/03/21 23:55:36 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 174 83 237 53 35 219 30 235 75 129 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:36 checksumer: &{sum:522660 oddByte:33 length:39}
2021/03/21 23:55:36 ret:  522693
2021/03/21 23:55:36 ret:  63948
2021/03/21 23:55:36 ret:  63948
2021/03/21 23:55:36 boom packet injected
2021/03/21 23:55:36 tcp packet: &{SrcPort:44627 DestPort:9000 Seq:518736769 Ack:3979782779 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:38 tcp packet: &{SrcPort:35215 DestPort:9000 Seq:140322445 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:38 tcp packet: &{SrcPort:35215 DestPort:9000 Seq:140322446 Ack:17770409 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:38 connection established
2021/03/21 23:55:38 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 137 143 1 13 161 9 8 93 38 142 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:38 checksumer: &{sum:440790 oddByte:33 length:39}
2021/03/21 23:55:38 ret:  440823
2021/03/21 23:55:38 ret:  47613
2021/03/21 23:55:38 ret:  47613
2021/03/21 23:55:38 boom packet injected
2021/03/21 23:55:38 tcp packet: &{SrcPort:35215 DestPort:9000 Seq:140322446 Ack:17770409 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:40 tcp packet: &{SrcPort:38611 DestPort:9000 Seq:2832462852 Ack:4215559985 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:40 tcp packet: &{SrcPort:40595 DestPort:9000 Seq:2979156605 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:40 tcp packet: &{SrcPort:40595 DestPort:9000 Seq:2979156606 Ack:3355606383 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:40 connection established
2021/03/21 23:55:40 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 158 147 200 0 246 207 177 146 82 126 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:40 checksumer: &{sum:499164 oddByte:33 length:39}
2021/03/21 23:55:40 ret:  499197
2021/03/21 23:55:40 ret:  40452
2021/03/21 23:55:40 ret:  40452
2021/03/21 23:55:40 boom packet injected
2021/03/21 23:55:40 tcp packet: &{SrcPort:40595 DestPort:9000 Seq:2979156606 Ack:3355606383 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:42 tcp packet: &{SrcPort:32971 DestPort:9000 Seq:2516013068 Ack:725590532 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:42 tcp packet: &{SrcPort:38003 DestPort:9000 Seq:2491376616 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:42 tcp packet: &{SrcPort:38003 DestPort:9000 Seq:2491376617 Ack:2936608596 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:42 connection established
2021/03/21 23:55:42 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 148 115 175 7 144 180 148 127 99 233 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:42 checksumer: &{sum:508231 oddByte:33 length:39}
2021/03/21 23:55:42 ret:  508264
2021/03/21 23:55:42 ret:  49519
2021/03/21 23:55:42 ret:  49519
2021/03/21 23:55:42 boom packet injected
2021/03/21 23:55:42 tcp packet: &{SrcPort:38003 DestPort:9000 Seq:2491376617 Ack:2936608596 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:44 tcp packet: &{SrcPort:33555 DestPort:9000 Seq:606129261 Ack:2098788293 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:44 tcp packet: &{SrcPort:36431 DestPort:9000 Seq:2430312126 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:44 tcp packet: &{SrcPort:36431 DestPort:9000 Seq:2430312127 Ack:1975164056 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:44 connection established
2021/03/21 23:55:44 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 142 79 117 185 21 248 144 219 158 191 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:44 checksumer: &{sum:574659 oddByte:33 length:39}
2021/03/21 23:55:44 ret:  574692
2021/03/21 23:55:44 ret:  50412
2021/03/21 23:55:44 ret:  50412
2021/03/21 23:55:44 boom packet injected
2021/03/21 23:55:44 tcp packet: &{SrcPort:36431 DestPort:9000 Seq:2430312127 Ack:1975164056 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:46 tcp packet: &{SrcPort:44627 DestPort:9000 Seq:518736770 Ack:3979782780 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:46 tcp packet: &{SrcPort:40479 DestPort:9000 Seq:2285452605 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:46 tcp packet: &{SrcPort:40479 DestPort:9000 Seq:2285452606 Ack:2329973649 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:46 connection established
2021/03/21 23:55:46 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 158 31 138 223 12 241 136 57 61 62 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:46 checksumer: &{sum:495734 oddByte:33 length:39}
2021/03/21 23:55:46 ret:  495767
2021/03/21 23:55:46 ret:  37022
2021/03/21 23:55:46 ret:  37022
2021/03/21 23:55:46 boom packet injected
2021/03/21 23:55:46 tcp packet: &{SrcPort:40479 DestPort:9000 Seq:2285452606 Ack:2329973649 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:48 tcp packet: &{SrcPort:35215 DestPort:9000 Seq:140322447 Ack:17770410 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:48 tcp packet: &{SrcPort:40415 DestPort:9000 Seq:2720598637 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:48 tcp packet: &{SrcPort:40415 DestPort:9000 Seq:2720598638 Ack:3112511478 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:48 connection established
2021/03/21 23:55:48 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 157 223 185 131 161 86 162 41 10 110 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:48 checksumer: &{sum:490016 oddByte:33 length:39}
2021/03/21 23:55:48 ret:  490049
2021/03/21 23:55:48 ret:  31304
2021/03/21 23:55:48 ret:  31304
2021/03/21 23:55:48 boom packet injected
2021/03/21 23:55:48 tcp packet: &{SrcPort:40415 DestPort:9000 Seq:2720598638 Ack:3112511478 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:50 tcp packet: &{SrcPort:40595 DestPort:9000 Seq:2979156607 Ack:3355606384 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:50 tcp packet: &{SrcPort:34751 DestPort:9000 Seq:4249056419 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:50 tcp packet: &{SrcPort:34751 DestPort:9000 Seq:4249056420 Ack:895955469 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:50 connection established
2021/03/21 23:55:50 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 135 191 53 101 171 109 253 67 116 164 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:50 checksumer: &{sum:500565 oddByte:33 length:39}
2021/03/21 23:55:50 ret:  500598
2021/03/21 23:55:50 ret:  41853
2021/03/21 23:55:50 ret:  41853
2021/03/21 23:55:50 boom packet injected
2021/03/21 23:55:50 tcp packet: &{SrcPort:34751 DestPort:9000 Seq:4249056420 Ack:895955469 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:52 tcp packet: &{SrcPort:38003 DestPort:9000 Seq:2491376618 Ack:2936608597 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:52 tcp packet: &{SrcPort:42745 DestPort:9000 Seq:2725481026 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:52 tcp packet: &{SrcPort:42745 DestPort:9000 Seq:2725481027 Ack:3513431631 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:52 connection established
2021/03/21 23:55:52 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 166 249 209 105 47 175 162 115 138 67 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:52 checksumer: &{sum:520783 oddByte:33 length:39}
2021/03/21 23:55:52 ret:  520816
2021/03/21 23:55:52 ret:  62071
2021/03/21 23:55:52 ret:  62071
2021/03/21 23:55:52 boom packet injected
2021/03/21 23:55:52 tcp packet: &{SrcPort:42745 DestPort:9000 Seq:2725481027 Ack:3513431631 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:54 tcp packet: &{SrcPort:36431 DestPort:9000 Seq:2430312128 Ack:1975164057 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:54 tcp packet: &{SrcPort:41575 DestPort:9000 Seq:4010559568 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:54 tcp packet: &{SrcPort:41575 DestPort:9000 Seq:4010559569 Ack:3436430435 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:54 connection established
2021/03/21 23:55:54 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 162 103 204 210 61 195 239 12 72 81 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:54 checksumer: &{sum:492639 oddByte:33 length:39}
2021/03/21 23:55:54 ret:  492672
2021/03/21 23:55:54 ret:  33927
2021/03/21 23:55:54 ret:  33927
2021/03/21 23:55:54 boom packet injected
2021/03/21 23:55:54 tcp packet: &{SrcPort:41575 DestPort:9000 Seq:4010559569 Ack:3436430435 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:56 tcp packet: &{SrcPort:40479 DestPort:9000 Seq:2285452607 Ack:2329973650 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:56 tcp packet: &{SrcPort:35157 DestPort:9000 Seq:732111985 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:56 tcp packet: &{SrcPort:35157 DestPort:9000 Seq:732111986 Ack:3835172436 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:56 connection established
2021/03/21 23:55:56 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 137 85 228 150 143 180 43 163 36 114 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:56 checksumer: &{sum:515784 oddByte:33 length:39}
2021/03/21 23:55:56 ret:  515817
2021/03/21 23:55:56 ret:  57072
2021/03/21 23:55:56 ret:  57072
2021/03/21 23:55:56 boom packet injected
2021/03/21 23:55:56 tcp packet: &{SrcPort:35157 DestPort:9000 Seq:732111986 Ack:3835172436 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:58 tcp packet: &{SrcPort:40415 DestPort:9000 Seq:2720598639 Ack:3112511479 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:58 tcp packet: &{SrcPort:35587 DestPort:9000 Seq:684073389 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:55:58 tcp packet: &{SrcPort:35587 DestPort:9000 Seq:684073390 Ack:2472365135 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:55:58 connection established
2021/03/21 23:55:58 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 139 3 147 91 197 175 40 198 33 174 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:55:58 checksumer: &{sum:502697 oddByte:33 length:39}
2021/03/21 23:55:58 ret:  502730
2021/03/21 23:55:58 ret:  43985
2021/03/21 23:55:58 ret:  43985
2021/03/21 23:55:58 boom packet injected
2021/03/21 23:55:58 tcp packet: &{SrcPort:35587 DestPort:9000 Seq:684073390 Ack:2472365135 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:00 tcp packet: &{SrcPort:34751 DestPort:9000 Seq:4249056421 Ack:895955470 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:00 tcp packet: &{SrcPort:35433 DestPort:9000 Seq:2672344517 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:00 tcp packet: &{SrcPort:35433 DestPort:9000 Seq:2672344518 Ack:1590509371 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:00 connection established
2021/03/21 23:56:00 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 138 105 94 203 184 155 159 72 189 198 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:00 checksumer: &{sum:526457 oddByte:33 length:39}
2021/03/21 23:56:00 ret:  526490
2021/03/21 23:56:00 ret:  2210
2021/03/21 23:56:00 ret:  2210
2021/03/21 23:56:00 boom packet injected
2021/03/21 23:56:00 tcp packet: &{SrcPort:35433 DestPort:9000 Seq:2672344518 Ack:1590509371 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:02 tcp packet: &{SrcPort:42745 DestPort:9000 Seq:2725481028 Ack:3513431632 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:02 tcp packet: &{SrcPort:36649 DestPort:9000 Seq:1507859785 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:02 tcp packet: &{SrcPort:36649 DestPort:9000 Seq:1507859786 Ack:3386232414 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:02 connection established
2021/03/21 23:56:02 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 143 41 201 212 71 190 89 224 29 74 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:02 checksumer: &{sum:528274 oddByte:33 length:39}
2021/03/21 23:56:02 ret:  528307
2021/03/21 23:56:02 ret:  4027
2021/03/21 23:56:02 ret:  4027
2021/03/21 23:56:02 boom packet injected
2021/03/21 23:56:02 tcp packet: &{SrcPort:36649 DestPort:9000 Seq:1507859786 Ack:3386232414 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:04 tcp packet: &{SrcPort:41575 DestPort:9000 Seq:4010559570 Ack:3436430436 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:04 tcp packet: &{SrcPort:39147 DestPort:9000 Seq:1405131308 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:04 tcp packet: &{SrcPort:39147 DestPort:9000 Seq:1405131309 Ack:2099513121 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:04 connection established
2021/03/21 23:56:04 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 152 235 125 34 128 129 83 192 154 45 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:04 checksumer: &{sum:501247 oddByte:33 length:39}
2021/03/21 23:56:04 ret:  501280
2021/03/21 23:56:04 ret:  42535
2021/03/21 23:56:04 ret:  42535
2021/03/21 23:56:04 boom packet injected
2021/03/21 23:56:04 tcp packet: &{SrcPort:39147 DestPort:9000 Seq:1405131309 Ack:2099513121 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:06 tcp packet: &{SrcPort:35157 DestPort:9000 Seq:732111987 Ack:3835172437 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:06 tcp packet: &{SrcPort:40675 DestPort:9000 Seq:3122255124 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:06 tcp packet: &{SrcPort:40675 DestPort:9000 Seq:3122255125 Ack:2561890564 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:06 connection established
2021/03/21 23:56:06 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 158 227 152 177 210 100 186 25 213 21 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:06 checksumer: &{sum:479764 oddByte:33 length:39}
2021/03/21 23:56:06 ret:  479797
2021/03/21 23:56:06 ret:  21052
2021/03/21 23:56:06 ret:  21052
2021/03/21 23:56:06 boom packet injected
2021/03/21 23:56:06 tcp packet: &{SrcPort:40675 DestPort:9000 Seq:3122255125 Ack:2561890564 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:08 tcp packet: &{SrcPort:35587 DestPort:9000 Seq:684073391 Ack:2472365136 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:08 tcp packet: &{SrcPort:40799 DestPort:9000 Seq:1818464779 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:08 tcp packet: &{SrcPort:40799 DestPort:9000 Seq:1818464780 Ack:2682282621 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:08 connection established
2021/03/21 23:56:08 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 159 95 159 222 219 221 108 99 146 12 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:08 checksumer: &{sum:504980 oddByte:33 length:39}
2021/03/21 23:56:08 ret:  505013
2021/03/21 23:56:08 ret:  46268
2021/03/21 23:56:08 ret:  46268
2021/03/21 23:56:08 boom packet injected
2021/03/21 23:56:08 tcp packet: &{SrcPort:40799 DestPort:9000 Seq:1818464780 Ack:2682282621 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:10 tcp packet: &{SrcPort:35433 DestPort:9000 Seq:2672344519 Ack:1590509372 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:10 tcp packet: &{SrcPort:43241 DestPort:9000 Seq:3028492331 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:10 tcp packet: &{SrcPort:43241 DestPort:9000 Seq:3028492332 Ack:1879670840 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:10 connection established
2021/03/21 23:56:10 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 168 233 112 7 249 152 180 131 32 44 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:10 checksumer: &{sum:483938 oddByte:33 length:39}
2021/03/21 23:56:10 ret:  483971
2021/03/21 23:56:10 ret:  25226
2021/03/21 23:56:10 ret:  25226
2021/03/21 23:56:10 boom packet injected
2021/03/21 23:56:10 tcp packet: &{SrcPort:43241 DestPort:9000 Seq:3028492332 Ack:1879670840 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:12 tcp packet: &{SrcPort:36649 DestPort:9000 Seq:1507859787 Ack:3386232415 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:12 tcp packet: &{SrcPort:43819 DestPort:9000 Seq:1819343943 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:12 tcp packet: &{SrcPort:43819 DestPort:9000 Seq:1819343944 Ack:1998757925 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:12 connection established
2021/03/21 23:56:12 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 171 43 119 33 25 133 108 112 252 72 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:12 checksumer: &{sum:439328 oddByte:33 length:39}
2021/03/21 23:56:12 ret:  439361
2021/03/21 23:56:12 ret:  46151
2021/03/21 23:56:12 ret:  46151
2021/03/21 23:56:12 boom packet injected
2021/03/21 23:56:12 tcp packet: &{SrcPort:43819 DestPort:9000 Seq:1819343944 Ack:1998757925 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:14 tcp packet: &{SrcPort:39147 DestPort:9000 Seq:1405131310 Ack:2099513122 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:14 tcp packet: &{SrcPort:41343 DestPort:9000 Seq:3312912018 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:14 tcp packet: &{SrcPort:41343 DestPort:9000 Seq:3312912019 Ack:426442190 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:14 connection established
2021/03/21 23:56:14 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 161 127 25 105 119 46 197 119 6 147 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:14 checksumer: &{sum:477817 oddByte:33 length:39}
2021/03/21 23:56:14 ret:  477850
2021/03/21 23:56:14 ret:  19105
2021/03/21 23:56:14 ret:  19105
2021/03/21 23:56:14 boom packet injected
2021/03/21 23:56:14 tcp packet: &{SrcPort:41343 DestPort:9000 Seq:3312912019 Ack:426442190 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:16 tcp packet: &{SrcPort:40675 DestPort:9000 Seq:3122255126 Ack:2561890565 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:16 tcp packet: &{SrcPort:33663 DestPort:9000 Seq:3594768641 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:16 tcp packet: &{SrcPort:33663 DestPort:9000 Seq:3594768642 Ack:2889076525 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:16 connection established
2021/03/21 23:56:16 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 131 127 172 50 72 141 214 67 209 2 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:16 checksumer: &{sum:437915 oddByte:33 length:39}
2021/03/21 23:56:16 ret:  437948
2021/03/21 23:56:16 ret:  44738
2021/03/21 23:56:16 ret:  44738
2021/03/21 23:56:16 boom packet injected
2021/03/21 23:56:16 tcp packet: &{SrcPort:33663 DestPort:9000 Seq:3594768642 Ack:2889076525 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:18 tcp packet: &{SrcPort:40799 DestPort:9000 Seq:1818464781 Ack:2682282622 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:18 tcp packet: &{SrcPort:35541 DestPort:9000 Seq:3216849687 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:18 tcp packet: &{SrcPort:35541 DestPort:9000 Seq:3216849688 Ack:4015662592 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:18 connection established
2021/03/21 23:56:18 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 138 213 239 88 159 96 191 189 59 24 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:18 checksumer: &{sum:494991 oddByte:33 length:39}
2021/03/21 23:56:18 ret:  495024
2021/03/21 23:56:18 ret:  36279
2021/03/21 23:56:18 ret:  36279
2021/03/21 23:56:18 boom packet injected
2021/03/21 23:56:18 tcp packet: &{SrcPort:35541 DestPort:9000 Seq:3216849688 Ack:4015662592 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:20 tcp packet: &{SrcPort:43241 DestPort:9000 Seq:3028492333 Ack:1879670841 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:20 tcp packet: &{SrcPort:45761 DestPort:9000 Seq:2098130301 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:20 tcp packet: &{SrcPort:45761 DestPort:9000 Seq:2098130302 Ack:2030488307 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:20 connection established
2021/03/21 23:56:20 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 178 193 121 5 68 83 125 14 237 126 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:20 checksumer: &{sum:446550 oddByte:33 length:39}
2021/03/21 23:56:20 ret:  446583
2021/03/21 23:56:20 ret:  53373
2021/03/21 23:56:20 ret:  53373
2021/03/21 23:56:20 boom packet injected
2021/03/21 23:56:20 tcp packet: &{SrcPort:45761 DestPort:9000 Seq:2098130302 Ack:2030488307 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:22 tcp packet: &{SrcPort:43819 DestPort:9000 Seq:1819343945 Ack:1998757926 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:22 tcp packet: &{SrcPort:46303 DestPort:9000 Seq:899496356 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:22 tcp packet: &{SrcPort:46303 DestPort:9000 Seq:899496357 Ack:3008533020 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:22 connection established
2021/03/21 23:56:22 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 180 223 179 81 11 124 53 157 57 165 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:22 checksumer: &{sum:530525 oddByte:33 length:39}
2021/03/21 23:56:22 ret:  530558
2021/03/21 23:56:22 ret:  6278
2021/03/21 23:56:22 ret:  6278
2021/03/21 23:56:22 boom packet injected
2021/03/21 23:56:22 tcp packet: &{SrcPort:46303 DestPort:9000 Seq:899496357 Ack:3008533020 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:24 tcp packet: &{SrcPort:41343 DestPort:9000 Seq:3312912020 Ack:426442191 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:24 tcp packet: &{SrcPort:33961 DestPort:9000 Seq:2426052707 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:24 tcp packet: &{SrcPort:33961 DestPort:9000 Seq:2426052708 Ack:2690590309 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:24 connection established
2021/03/21 23:56:24 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 132 169 160 93 159 197 144 154 160 100 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:24 checksumer: &{sum:521328 oddByte:33 length:39}
2021/03/21 23:56:24 ret:  521361
2021/03/21 23:56:24 ret:  62616
2021/03/21 23:56:24 ret:  62616
2021/03/21 23:56:24 boom packet injected
2021/03/21 23:56:24 tcp packet: &{SrcPort:33961 DestPort:9000 Seq:2426052708 Ack:2690590309 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:26 tcp packet: &{SrcPort:33663 DestPort:9000 Seq:3594768643 Ack:2889076526 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:26 tcp packet: &{SrcPort:36415 DestPort:9000 Seq:1329698081 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:26 tcp packet: &{SrcPort:36415 DestPort:9000 Seq:1329698082 Ack:1124980128 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:26 connection established
2021/03/21 23:56:26 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 142 63 67 12 79 0 79 65 149 34 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:26 checksumer: &{sum:383105 oddByte:33 length:39}
2021/03/21 23:56:26 ret:  383138
2021/03/21 23:56:26 ret:  55463
2021/03/21 23:56:26 ret:  55463
2021/03/21 23:56:26 boom packet injected
2021/03/21 23:56:26 tcp packet: &{SrcPort:36415 DestPort:9000 Seq:1329698082 Ack:1124980128 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:28 tcp packet: &{SrcPort:35541 DestPort:9000 Seq:3216849689 Ack:4015662593 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:28 tcp packet: &{SrcPort:35037 DestPort:9000 Seq:4238476647 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:28 tcp packet: &{SrcPort:35037 DestPort:9000 Seq:4238476648 Ack:962324430 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:28 connection established
2021/03/21 23:56:28 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 136 221 57 90 97 46 252 162 5 104 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:28 checksumer: &{sum:498080 oddByte:33 length:39}
2021/03/21 23:56:28 ret:  498113
2021/03/21 23:56:28 ret:  39368
2021/03/21 23:56:28 ret:  39368
2021/03/21 23:56:28 boom packet injected
2021/03/21 23:56:28 tcp packet: &{SrcPort:35037 DestPort:9000 Seq:4238476648 Ack:962324430 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:30 tcp packet: &{SrcPort:45761 DestPort:9000 Seq:2098130303 Ack:2030488308 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:30 tcp packet: &{SrcPort:33265 DestPort:9000 Seq:2281495573 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:30 tcp packet: &{SrcPort:33265 DestPort:9000 Seq:2281495574 Ack:1087291349 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:30 connection established
2021/03/21 23:56:30 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 129 241 64 205 57 53 135 252 220 22 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:30 checksumer: &{sum:536538 oddByte:33 length:39}
2021/03/21 23:56:30 ret:  536571
2021/03/21 23:56:30 ret:  12291
2021/03/21 23:56:30 ret:  12291
2021/03/21 23:56:30 boom packet injected
2021/03/21 23:56:30 tcp packet: &{SrcPort:33265 DestPort:9000 Seq:2281495574 Ack:1087291349 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:32 tcp packet: &{SrcPort:46303 DestPort:9000 Seq:899496358 Ack:3008533021 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:32 tcp packet: &{SrcPort:46025 DestPort:9000 Seq:3405321800 Ack:0 Flags:40962 WindowSize:64240 Checksum:6657 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.2.229
2021/03/21 23:56:32 tcp packet: &{SrcPort:46025 DestPort:9000 Seq:3405321801 Ack:1060086252 Flags:32784 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.2.229
2021/03/21 23:56:32 connection established
2021/03/21 23:56:32 calling checksumTCP: 10.244.1.6 10.244.2.229 [35 40 179 201 63 46 27 76 202 249 22 73 80 24 1 246 0 0 0 0] [98 111 111 109 33 33 33]
2021/03/21 23:56:32 checksumer: &{sum:503658 oddByte:33 length:39}
2021/03/21 23:56:32 ret:  503691
2021/03/21 23:56:32 ret:  44946
2021/03/21 23:56:32 ret:  44946
2021/03/21 23:56:32 boom packet injected
2021/03/21 23:56:32 tcp packet: &{SrcPort:46025 DestPort:9000 Seq:3405321801 Ack:1060086252 Flags:32785 WindowSize:502 Checksum:6649 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.2.229

Mar 21 23:56:33.267: INFO: boom-server OK: did not receive any RST packet
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:56:33.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-6267" for this suite.

• [SLOW TEST:76.416 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":54,"completed":20,"skipped":3897,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
SSSSSSS
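The conntrack trace above repeats two computations worth decoding: each `checksumer` line is followed by a raw sum, then the same value twice after folding (e.g. 515784 + odd byte 33 = 515817, folded to 57072), and each `Flags:` value such as 40962 is the raw 16-bit TCP offset+flags word, not just the flag bits. A minimal Go sketch of both, with assumed helper names (the real agnhost code may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// foldChecksum reduces a running 32-bit checksum sum to 16 bits by adding
// the carry back in, as TCP's ones'-complement arithmetic requires. This is
// what produces the paired "ret:" lines in the trace.
func foldChecksum(sum uint32) uint32 {
	for sum > 0xffff {
		sum = (sum >> 16) + (sum & 0xffff)
	}
	return sum
}

// tcpFlags decodes the low bits of the 16-bit offset+flags word. The high
// nibble of e.g. 40962 (0xA002) is the data offset (10 words, i.e. a SYN
// with options), which is why the numbers look larger than the flag bits.
func tcpFlags(word uint16) string {
	bits := []struct {
		mask uint16
		name string
	}{{0x01, "FIN"}, {0x02, "SYN"}, {0x04, "RST"}, {0x08, "PSH"}, {0x10, "ACK"}, {0x20, "URG"}}
	var set []string
	for _, b := range bits {
		if word&b.mask != 0 {
			set = append(set, b.name)
		}
	}
	return strings.Join(set, " ")
}

func main() {
	// From the trace at 23:55:56: sum 515784 plus odd byte 33 is 515817,
	// which folds to 57072 -- the value printed twice before injection.
	fmt.Println(foldChecksum(515784 + 33)) // 57072
	fmt.Println(tcpFlags(40962))           // SYN
	fmt.Println(tcpFlags(32785))           // FIN ACK
}
```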
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:56:33.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4411.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4411.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4411.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4411.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4411.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4411.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

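Both probe scripts above derive the pod's A record with the same awk pipeline: the dots of the pod IP become dashes and the namespace-scoped `pod.cluster.local` suffix is appended. The same mapping as a small Go sketch (helper name assumed; IP and namespace taken from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord mirrors the awk step in the probe script: each dot in the pod
// IP becomes a dash, then the namespace and pod.cluster.local suffix follow.
func podARecord(podIP, namespace string) string {
	return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.244.1.6", "dns-4411")) // 10-244-1-6.dns-4411.pod.cluster.local
}
```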
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 21 23:56:42.743: INFO: DNS probes using dns-4411/dns-test-1859c6a1-21e3-48e7-824c-de30eea73541 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:56:42.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4411" for this suite.

• [SLOW TEST:9.404 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":54,"completed":21,"skipped":3904,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
SSSSSS
------------------------------
[sig-network] DNS configMap nameserver Forward external name lookup 
  should forward externalname lookup to upstream nameserver [Slow][Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:341
[BeforeEach] Forward external name lookup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:56:42.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns-config-map
STEP: Waiting for a default service account to be provisioned in namespace
[It] should forward externalname lookup to upstream nameserver [Slow][Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:341
STEP: Finding a DNS pod
Mar 21 23:56:43.679: INFO: Using DNS pod: coredns-74ff55c5b-7rm8b
Mar 21 23:56:43.724: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-configmap-9cafb760-aa8a-4b9d-ad21-0a15a15130e8  dns-config-map-3240  7747a2f2-d936-4c3a-a07c-136b80737fdc 6970434 0 2021-03-21 23:56:43 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2021-03-21 23:56:43 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":10101,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xrwg2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrwg2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:10101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xrwg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 23:56:49.859: INFO: Created service &Service{ObjectMeta:{e2e-dns-configmap  dns-config-map-3240  abc45fe0-cbf1-4c27-9c9e-104f2d3672bd 6970571 0 2021-03-21 23:56:49 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2021-03-21 23:56:49 +0000 UTC FieldsV1 {"f:spec":{"f:ports":{".":{},"k:{\"port\":10101,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{".":{},"f:app":{}},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:10101,TargetPort:{0 10101 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: e2e-dns-configmap,},ClusterIP:10.96.203.229,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.203.229],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
Mar 21 23:56:49.995: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-02357f75-2281-4051-902b-8682daf04b42  dns-config-map-3240  1609abd9-d002-4711-b90a-564557c90a72 6970579 0 2021-03-21 23:56:49 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2021-03-21 23:56:49 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-q9zjs,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:default-token-xrwg2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xrwg2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,Co
nfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[/coredns],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:default-token-xrwg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 21 23:56:56.886: INFO: ExecWithOptions {Command:[dig +short dns-externalname-upstream-test.dns-config-map-3240.svc.cluster.local] Namespace:dns-config-map-3240 PodName:e2e-dns-configmap-9cafb760-aa8a-4b9d-ad21-0a15a15130e8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:56:56.886: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:56:57.102: INFO: Running dig: [dig +short dns-externalname-upstream-test.dns-config-map-3240.svc.cluster.local], stdout: "dns.google.\n8.8.8.8\n8.8.4.4", stderr: "", err: 
STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 {
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        forward . 10.244.1.12
    }] BinaryData:map[]}
Mar 21 23:56:58.362: INFO: ExecWithOptions {Command:[dig +short dns-externalname-upstream-local.dns-config-map-3240.svc.cluster.local] Namespace:dns-config-map-3240 PodName:e2e-dns-configmap-9cafb760-aa8a-4b9d-ad21-0a15a15130e8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:56:58.363: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:56:59.075: INFO: Running dig: [dig +short dns-externalname-upstream-local.dns-config-map-3240.svc.cluster.local], stdout: "foo.example.com.", stderr: "", err: 
Mar 21 23:57:00.076: INFO: ExecWithOptions {Command:[dig +short dns-externalname-upstream-local.dns-config-map-3240.svc.cluster.local] Namespace:dns-config-map-3240 PodName:e2e-dns-configmap-9cafb760-aa8a-4b9d-ad21-0a15a15130e8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:57:00.076: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:57:15.856: INFO: Running dig: [dig +short dns-externalname-upstream-local.dns-config-map-3240.svc.cluster.local], stdout: ";; connection timed out; no servers could be reached", stderr: "", err: command terminated with exit code 9
Mar 21 23:57:16.076: INFO: ExecWithOptions {Command:[dig +short dns-externalname-upstream-local.dns-config-map-3240.svc.cluster.local] Namespace:dns-config-map-3240 PodName:e2e-dns-configmap-9cafb760-aa8a-4b9d-ad21-0a15a15130e8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:57:16.076: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:57:16.200: INFO: Running dig: [dig +short dns-externalname-upstream-local.dns-config-map-3240.svc.cluster.local], stdout: "foo.example.com.\n192.0.2.123", stderr: "", err: 
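The polling above succeeds only once dig returns both the CNAME target and the A record served by the upstream nameserver. A minimal Python sketch of that success condition, using the outputs captured in the log (the helper name is illustrative, not from the test source):

```python
def lookup_succeeded(stdout: str, want_cname: str, want_ip: str) -> bool:
    """True when the dig answer contains both the CNAME target and the
    A record from the upstream nameserver (hypothetical helper)."""
    lines = stdout.strip().splitlines()
    return want_cname in lines and want_ip in lines

# Answers seen in the log: the first lacks the upstream A record,
# the final one has both, so polling stops there.
assert not lookup_succeeded("foo.example.com.",
                            "foo.example.com.", "192.0.2.123")
assert lookup_succeeded("foo.example.com.\n192.0.2.123",
                        "foo.example.com.", "192.0.2.123")
```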
STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
] BinaryData:map[]}
STEP: deleting the test externalName service
STEP: Updating the ConfigMap (kube-system:coredns) to {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:coredns GenerateName: Namespace:kube-system SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp: DeletionGracePeriodSeconds: Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Immutable: Data:map[Corefile:.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
] BinaryData:map[]}
[AfterEach] Forward external name lookup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:57:25.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-config-map-3240" for this suite.

• [SLOW TEST:43.513 seconds]
[sig-network] DNS configMap nameserver
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Forward external name lookup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:338
    should forward externalname lookup to upstream nameserver [Slow][Serial]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:341
------------------------------
{"msg":"PASSED [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]","total":54,"completed":22,"skipped":3910,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:57:26.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Mar 21 23:57:27.854: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/: 
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-7795
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 23:57:29.229: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 21 23:57:29.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:57:31.567: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:57:33.597: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:57:35.558: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:37.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:39.547: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:41.559: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:43.582: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:45.695: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:47.534: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:49.540: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:51.551: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:57:53.740: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 21 23:57:54.285: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 21 23:57:59.261: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 21 23:57:59.261: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 21 23:57:59.733: INFO: Service node-port-service in namespace nettest-7795 found.
Mar 21 23:58:00.436: INFO: Service session-affinity-service in namespace nettest-7795 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 21 23:58:01.552: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 21 23:58:02.649: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.96.39.196:80
Mar 21 23:58:03.101: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:03.101: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:03.220: INFO: Tries: 10, in try: 0, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:05.318: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:05.318: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:05.535: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:07.658: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:07.658: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:07.881: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:09.892: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:09.892: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:10.039: INFO: Tries: 10, in try: 3, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:12.097: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:12.097: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:12.225: INFO: Tries: 10, in try: 4, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:14.229: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:14.229: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:14.341: INFO: Tries: 10, in try: 5, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:16.435: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:16.435: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:16.584: INFO: Tries: 10, in try: 6, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:18.605: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:18.605: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:18.779: INFO: Tries: 10, in try: 7, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:20.795: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:20.795: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:20.954: INFO: Tries: 10, in try: 8, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
Mar 21 23:58:23.499: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.18:9080/dial?request=hostName&protocol=http&host=10.96.39.196&port=80&tries=1'] Namespace:nettest-7795 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:23.499: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:23.884: INFO: Tries: 10, in try: 9, stdout: {"responses":["netserver-1"]}, stderr: , command run in Pod { "name: test-container-pod, namespace: nettest-7795, hostIp: 172.18.0.13, podIp: 10.244.1.18, conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:57:54 +0000 UTC  }]" }
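All ten tries above returned the same backend, which is exactly what the test asserts for a Service with `sessionAffinity: ClientIP`. A hedged Python sketch of that check (the response strings are taken from the log; the variable names are illustrative):

```python
import json

# stdout from each of the 10 curl tries logged above
tries = ['{"responses":["netserver-1"]}'] * 10

# ClientIP session affinity holds when every request from the same
# client lands on a single backend pod.
backends = {json.loads(t)["responses"][0] for t in tries}
assert backends == {"netserver-1"}
```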
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:58:25.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7795" for this suite.

• [SLOW TEST:57.081 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]","total":54,"completed":24,"skipped":4036,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking 
  should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:58:25.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
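The STEP lines above enumerate the fixed apiserver endpoints whose URLs the test guarantees to stay stable. A small Python sketch mirroring that list (the constant name is made up for illustration):

```python
# Fixed apiserver URL paths probed by the test, in the order logged above.
STATIC_PATHS = ["/healthz", "/api", "/apis", "/metrics",
                "/openapi/v2", "/version", "/logs"]

assert all(p.startswith("/") for p in STATIC_PATHS)
assert len(set(STATIC_PATHS)) == len(STATIC_PATHS)  # no duplicates
```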
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:58:27.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-8997" for this suite.
•{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":54,"completed":25,"skipped":4054,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:58:27.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-130
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 23:58:27.941: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 21 23:58:28.606: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:58:31.062: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:58:33.170: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:58:34.651: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:58:36.620: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:58:39.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:58:40.913: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:58:42.685: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:58:44.642: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:58:46.635: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 21 23:58:46.694: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 21 23:58:51.045: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 21 23:58:51.045: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 21 23:58:51.253: INFO: Service node-port-service in namespace nettest-130 found.
Mar 21 23:58:51.546: INFO: Service session-affinity-service in namespace nettest-130 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 21 23:58:52.598: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 21 23:58:53.624: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) 172.18.0.9 (node) --> 172.18.0.9:30953 (nodeIP) and getting ALL host endpoints
Mar 21 23:58:53.691: INFO: Going to poll 172.18.0.9 on port 30953 at least 0 times, with a maximum of 34 tries before failing
Mar 21 23:58:53.699: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 30953 | grep -v '^\s*$'] Namespace:nettest-130 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:53.699: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:54.812: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0])
Mar 21 23:58:56.913: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 30953 | grep -v '^\s*$'] Namespace:nettest-130 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:58:56.913: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:58:58.077: INFO: Waiting for [netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-0])
Mar 21 23:59:00.092: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 30953 | grep -v '^\s*$'] Namespace:nettest-130 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:59:00.092: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:59:01.239: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
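The dial loop above keeps probing the NodePort and accumulating the hostnames it gets back ("Waiting for [netserver-1] endpoints (expected=..., actual=...)") until every expected backend has answered at least once. A minimal Python sketch of that accumulate-until-complete logic, with a simulated probe standing in for the `nc`/curl exec (hypothetical helper, not the framework's code):

```python
import itertools

def collect_endpoints(probe, expected, max_tries):
    """Call probe() (returns a responding hostname, or None on failure)
    up to max_tries times, accumulating distinct responders, and stop as
    soon as the set of responders matches the expected set. Mirrors the
    'Waiting for [...] endpoints' / 'Found all N expected endpoints'
    lines in the log above."""
    seen = set()
    for _ in range(max_tries):
        host = probe()
        if host is not None:
            seen.add(host)
        if seen == set(expected):
            return sorted(seen)  # all expected endpoints observed
    raise TimeoutError(f"missing endpoints: {sorted(set(expected) - seen)}")

# Simulated probe: the NodePort load-balances, so responses alternate
# between the two netserver pods in some order.
responses = itertools.cycle(["netserver-0", "netserver-0", "netserver-1"])
result = collect_endpoints(lambda: next(responses),
                           ["netserver-0", "netserver-1"], max_tries=34)
```

The early return explains why the log above needed only three probe attempts out of the 34-try budget.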
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:59:01.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-130" for this suite.

• [SLOW TEST:33.681 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality","total":54,"completed":26,"skipped":4126,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:59:01.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-5332
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 21 23:59:01.666: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 21 23:59:01.820: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:59:04.281: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:59:05.872: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:59:07.855: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:59:09.955: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:59:11.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:59:13.949: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:59:15.880: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:59:17.823: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:59:20.281: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 21 23:59:21.868: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 21 23:59:21.990: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 21 23:59:24.111: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 21 23:59:30.893: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 21 23:59:30.893: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 21 23:59:31.302: INFO: Service node-port-service in namespace nettest-5332 found.
Mar 21 23:59:31.744: INFO: Service session-affinity-service in namespace nettest-5332 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 21 23:59:32.789: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 21 23:59:33.866: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) 172.18.0.9 (node) --> 172.18.0.9:32506 (nodeIP) and getting ALL host endpoints
Mar 21 23:59:33.880: INFO: Going to poll 172.18.0.9 on port 32506 at least 0 times, with a maximum of 34 tries before failing
Mar 21 23:59:33.925: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:32506/hostName | grep -v '^\s*$'] Namespace:nettest-5332 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:59:33.925: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:59:34.073: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1])
Mar 21 23:59:36.110: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:32506/hostName | grep -v '^\s*$'] Namespace:nettest-5332 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:59:36.110: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:59:36.265: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
STEP: Deleting the node port access point
STEP: dialing(http) 172.18.0.9 (node) --> 172.18.0.9:32506 (nodeIP) and getting ZERO host endpoints
Mar 21 23:59:51.576: INFO: Going to poll 172.18.0.9 on port 32506 at least 34 times, with a maximum of 34 tries before failing
Mar 21 23:59:51.614: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:32506/hostName | grep -v '^\s*$'] Namespace:nettest-5332 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:59:51.614: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:59:51.792: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:32506/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 21 23:59:51.792: INFO: Waiting for [] endpoints (expected=[], actual=[])
Mar 21 23:59:53.901: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:32506/hostName | grep -v '^\s*$'] Namespace:nettest-5332 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 21 23:59:53.901: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:59:54.007: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:32506/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 21 23:59:54.007: INFO: Waiting for [] endpoints (expected=[], actual=[])
[... 31 further poll attempts elided, Mar 21 23:59:56 through Mar 22 00:01:11: each repeats the same ExecWithOptions curl against 172.18.0.9:32506, fails with "command terminated with exit code 1", and logs "Waiting for [] endpoints (expected=[], actual=[])" ...]
Mar 22 00:01:13.267: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:32506/hostName | grep -v '^\s*$'] Namespace:nettest-5332 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:01:13.267: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:01:13.859: INFO: Failed to execute "curl -g -q -s --max-time 15 --connect-timeout 1 http://172.18.0.9:32506/hostName | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 22 00:01:13.859: INFO: Found all 0 expected endpoints: []
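Note the inverted polling semantics for the ZERO-endpoints check: with two endpoints expected, the loop stops as soon as both have answered, but after the node port is deleted the log says "at least 34 times, with a maximum of 34", i.e. the test must spend its entire budget and see every probe fail before concluding the port is really gone. A sketch of that negative check, again assuming a hypothetical probe that returns None on failure:

```python
def verify_no_endpoints(probe, tries: int) -> bool:
    """For the zero-endpoints case the test cannot stop early on
    failure: every one of `tries` probes must fail (return None) before
    it reports 'Found all 0 expected endpoints: []'. A single success
    means the deleted NodePort is still reachable."""
    for _ in range(tries):
        if probe() is not None:
            return False  # something still answered: port not removed
    return True
```

This is why the negative half of the test above takes over a minute of wall-clock time even though nothing ever responds.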
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:01:13.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5332" for this suite.

• [SLOW TEST:133.312 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]","total":54,"completed":27,"skipped":4183,"failed":7,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:01:14.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Mar 22 00:01:18.633: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:01:21.353: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:01:22.663: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:01:24.724: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Mar 22 00:01:24.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Mar 22 00:01:32.624: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Mar 22 00:01:32.624: INFO: stdout: "iptables"
Mar 22 00:01:32.624: INFO: proxyMode: iptables
Mar 22 00:01:33.767: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Mar 22 00:01:33.974: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-4742
Mar 22 00:01:34.067: INFO: sourceip-test cluster ip: 10.96.71.53
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Mar 22 00:01:34.655: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:01:36.882: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:01:38.843: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:01:40.680: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-4742 to expose endpoints map[echo-sourceip:[8080]]
Mar 22 00:01:40.695: INFO: successfully validated that service sourceip-test in namespace services-4742 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Mar 22 00:01:40.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Mar 22 00:01:43.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968101, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:01:44.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968101, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:01:46.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968101, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:01:49.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:01:51.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:01:53.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:01:54.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:01:57.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:01:58.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:00.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:02.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:04.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:06.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:08.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:10.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:12.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:14.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:16.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:18.841: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:20.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:22.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:24.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:26.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:28.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:30.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:32.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:02:34.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968107, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:03:07.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968184, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968100, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-6d56d7cdf5\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:03:08.919: INFO: Waiting up to 2m0s to get response from 10.96.71.53:8080
Mar 22 00:03:08.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:03:10.214: INFO: rc: 7
Mar 22 00:03:10.214: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:04:40.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:04:42.115: INFO: rc: 7
Mar 22 00:04:42.115: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:04:44.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:04:45.417: INFO: rc: 7
Mar 22 00:04:45.417: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:04:47.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:04:48.743: INFO: rc: 7
Mar 22 00:04:48.743: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:04:50.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:04:52.300: INFO: rc: 7
Mar 22 00:04:52.300: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:04:54.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:04:55.562: INFO: rc: 7
Mar 22 00:04:55.562: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:04:57.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:04:58.824: INFO: rc: 7
Mar 22 00:04:58.824: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:05:00.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:05:02.120: INFO: rc: 7
Mar 22 00:05:02.120: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:05:04.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:05:05.415: INFO: rc: 7
Mar 22 00:05:05.415: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:05:07.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip'
Mar 22 00:05:08.682: INFO: rc: 7
Mar 22 00:05:08.682: INFO: got err: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Mar 22 00:05:10.682: FAIL: Unexpected error:
    : {
        Err: {
            s: "error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip\ncommand terminated with exit code 7\n\nerror:\nexit status 7",
        },
        Code: 7,
    }
    error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
    Command stdout:
    
    stderr:
    + curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
    command terminated with exit code 7
    
    error:
    exit status 7
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execSourceIPTest(0x0, 0x0, 0x0, 0x0, 0xc0037301c0, 0x1a, 0xc0056f6120, 0x15, 0xc005880370, 0xd, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133 +0x4d9
k8s.io/kubernetes/test/e2e/network.glob..func24.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:980 +0x1014
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00386cd80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
Mar 22 00:05:10.683: INFO: Deleting deployment
Mar 22 00:05:10.787: INFO: Cleaning up the echo server pod
Mar 22 00:05:10.803: FAIL: failed to delete pod: echo-sourceip on node
Unexpected error:
    <*errors.StatusError | 0xc0026cc280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"echo-sourceip\" not found",
            Reason: "NotFound",
            Details: {
                Name: "echo-sourceip",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "echo-sourceip" not found
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.6.2(0xc00034ec10, 0xc0035f59b0, 0xd, 0x6b6ecfd, 0xd)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:952 +0x1ff
panic(0x6714bc0, 0xc003866240)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00162f000, 0x3b6, 0x82e5dca, 0x65, 0x85, 0xc001b92380, 0x345)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x5ea69e0, 0x72180e0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00162f000, 0x3b6, 0xc0009523d0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc00162e000, 0x3a1, 0xc0006288e0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc000952568, 0x7345b18, 0x99518a8, 0x0, 0x0, 0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc000952568, 0x7345b18, 0x99518a8, 0x0, 0x0, 0x0, 0x2028eba)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x72e8800, 0xc0056e8078, 0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xe7
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40
k8s.io/kubernetes/test/e2e/network.execSourceIPTest(0x0, 0x0, 0x0, 0x0, 0xc0037301c0, 0x1a, 0xc0056f6120, 0x15, 0xc005880370, 0xd, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133 +0x4d9
k8s.io/kubernetes/test/e2e/network.glob..func24.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:980 +0x1014
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00386cd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00386cd80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00386cd80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
Mar 22 00:05:10.804: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-4742".
STEP: Found 39 events.
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:18 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-4742/kube-proxy-mode-detector to latest-worker2
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:20 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:22 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Created: Created container agnhost-container
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:23 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Started: Started container agnhost-container
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:33 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Killing: Stopping container agnhost-container
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:34 +0000 UTC - event for echo-sourceip: {default-scheduler } Scheduled: Successfully assigned services-4742/echo-sourceip to latest-worker2
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:36 +0000 UTC - event for echo-sourceip: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:37 +0000 UTC - event for echo-sourceip: {kubelet latest-worker2} Created: Created container agnhost-container
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:38 +0000 UTC - event for echo-sourceip: {kubelet latest-worker2} Started: Started container agnhost-container
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:40 +0000 UTC - event for pause-pod: {deployment-controller } ScalingReplicaSet: Scaled up replica set pause-pod-6d56d7cdf5 to 2
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:40 +0000 UTC - event for pause-pod-6d56d7cdf5: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-6d56d7cdf5-r96fn
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:40 +0000 UTC - event for pause-pod-6d56d7cdf5: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-6d56d7cdf5-6bvc9
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:40 +0000 UTC - event for pause-pod-6d56d7cdf5-6bvc9: {default-scheduler } Scheduled: Successfully assigned services-4742/pause-pod-6d56d7cdf5-6bvc9 to latest-worker
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:40 +0000 UTC - event for pause-pod-6d56d7cdf5-r96fn: {default-scheduler } Scheduled: Successfully assigned services-4742/pause-pod-6d56d7cdf5-r96fn to latest-worker2
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:42 +0000 UTC - event for pause-pod-6d56d7cdf5-6bvc9: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:44 +0000 UTC - event for pause-pod-6d56d7cdf5-6bvc9: {kubelet latest-worker} Created: Created container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:44 +0000 UTC - event for pause-pod-6d56d7cdf5-r96fn: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:45 +0000 UTC - event for echo-sourceip: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-4742/echo-sourceip
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:45 +0000 UTC - event for pause-pod-6d56d7cdf5-6bvc9: {kubelet latest-worker} Started: Started container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:45 +0000 UTC - event for pause-pod-6d56d7cdf5-r96fn: {kubelet latest-worker2} Created: Created container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:45 +0000 UTC - event for pause-pod-6d56d7cdf5-r96fn: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-4742/pause-pod-6d56d7cdf5-r96fn
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:46 +0000 UTC - event for echo-sourceip: {kubelet latest-worker2} Killing: Stopping container agnhost-container
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:46 +0000 UTC - event for echo-sourceip: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod services-4742/echo-sourceip
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:46 +0000 UTC - event for pause-pod-6d56d7cdf5: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-6d56d7cdf5-phkln
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:46 +0000 UTC - event for pause-pod-6d56d7cdf5-6bvc9: {taint-controller } TaintManagerEviction: Marking for deletion Pod services-4742/pause-pod-6d56d7cdf5-6bvc9
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:46 +0000 UTC - event for pause-pod-6d56d7cdf5-phkln: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {kubernetes.io/e2e-evict-taint-key: evictTaintVal}, that the pod didn't tolerate.
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:46 +0000 UTC - event for pause-pod-6d56d7cdf5-r96fn: {kubelet latest-worker2} Started: Started container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:46 +0000 UTC - event for pause-pod-6d56d7cdf5-r96fn: {kubelet latest-worker2} Killing: Stopping container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:47 +0000 UTC - event for pause-pod-6d56d7cdf5: {replicaset-controller } SuccessfulCreate: Created pod: pause-pod-6d56d7cdf5-zqzsw
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:47 +0000 UTC - event for pause-pod-6d56d7cdf5-6bvc9: {kubelet latest-worker} Killing: Stopping container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:01:47 +0000 UTC - event for pause-pod-6d56d7cdf5-zqzsw: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {kubernetes.io/e2e-evict-taint-key: evictTaintVal}, that the pod didn't tolerate.
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:02:53 +0000 UTC - event for pause-pod-6d56d7cdf5-zqzsw: {default-scheduler } Scheduled: Successfully assigned services-4742/pause-pod-6d56d7cdf5-zqzsw to latest-worker2
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:02:55 +0000 UTC - event for pause-pod-6d56d7cdf5-zqzsw: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:03:01 +0000 UTC - event for pause-pod-6d56d7cdf5-phkln: {default-scheduler } Scheduled: Successfully assigned services-4742/pause-pod-6d56d7cdf5-phkln to latest-worker
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:03:01 +0000 UTC - event for pause-pod-6d56d7cdf5-zqzsw: {kubelet latest-worker2} Created: Created container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:03:02 +0000 UTC - event for pause-pod-6d56d7cdf5-zqzsw: {kubelet latest-worker2} Started: Started container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:03:04 +0000 UTC - event for pause-pod-6d56d7cdf5-phkln: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:03:07 +0000 UTC - event for pause-pod-6d56d7cdf5-phkln: {kubelet latest-worker} Created: Created container agnhost-pause
Mar 22 00:05:11.022: INFO: At 2021-03-22 00:03:08 +0000 UTC - event for pause-pod-6d56d7cdf5-phkln: {kubelet latest-worker} Started: Started container agnhost-pause
Mar 22 00:05:11.057: INFO: POD                         NODE            PHASE    GRACE  CONDITIONS
Mar 22 00:05:11.057: INFO: pause-pod-6d56d7cdf5-phkln  latest-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:03:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:03:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:03:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:03:01 +0000 UTC  }]
Mar 22 00:05:11.057: INFO: pause-pod-6d56d7cdf5-zqzsw  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:02:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:03:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:03:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:02:53 +0000 UTC  }]
Mar 22 00:05:11.057: INFO: 
Mar 22 00:05:11.093: INFO: 
Logging node info for node latest-control-plane
Mar 22 00:05:11.114: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane    490b9532-4cb6-4803-8805-500c50bef538 6982636 0 2021-02-19 10:11:38 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"
f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:05:11.115: INFO: 
Logging kubelet events for node latest-control-plane
Mar 22 00:05:11.192: INFO: 
Logging pods the kubelet thinks are on node latest-control-plane
Mar 22 00:05:11.303: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container coredns ready: true, restart count 0
Mar 22 00:05:11.303: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container etcd ready: true, restart count 0
Mar 22 00:05:11.303: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 22 00:05:11.303: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 22 00:05:11.303: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 22 00:05:11.303: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container coredns ready: true, restart count 0
Mar 22 00:05:11.303: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container local-path-provisioner ready: true, restart count 0
Mar 22 00:05:11.303: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container kube-controller-manager ready: true, restart count 0
Mar 22 00:05:11.303: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:11.303: INFO: 	Container kube-scheduler ready: true, restart count 0
W0322 00:05:11.349866       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:05:11.517: INFO: 
Latency metrics for node latest-control-plane
Mar 22 00:05:11.517: INFO: 
Logging node info for node latest-worker
Mar 22 00:05:11.570: INFO: Node Info: &Node{ObjectMeta:{latest-worker    52cd6d4b-d53f-435d-801a-04c2822dec44 6983272 0 2021-02-19 10:12:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mo
ck-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-51
20","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volum
es-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/lates
t-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab 
k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:05:11.571: INFO: 
Logging kubelet events for node latest-worker
Mar 22 00:05:11.596: INFO: 
Logging pods the kubelet thinks are on node latest-worker
Mar 22 00:05:12.197: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:12.197: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 22 00:05:12.197: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:12.197: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 22 00:05:12.197: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:12.197: INFO: 	Container chaos-mesh ready: true, restart count 0
Mar 22 00:05:12.197: INFO: csi-mockplugin-0 started at 2021-03-22 00:03:31 +0000 UTC (0+3 container statuses recorded)
Mar 22 00:05:12.197: INFO: 	Container csi-provisioner ready: false, restart count 0
Mar 22 00:05:12.197: INFO: 	Container driver-registrar ready: false, restart count 0
Mar 22 00:05:12.197: INFO: 	Container mock ready: false, restart count 0
Mar 22 00:05:12.197: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:12.197: INFO: 	Container chaos-daemon ready: true, restart count 0
W0322 00:05:12.347808       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:05:13.149: INFO: 
Latency metrics for node latest-worker
Mar 22 00:05:13.149: INFO: 
Logging node info for node latest-worker2
Mar 22 00:05:13.194: INFO: Node Info: &Node{ObjectMeta:{latest-worker2    7d2a1377-0c6f-45fb-899e-6c307ecb1803 6981103 0 2021-02-19 10:12:05 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-
mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164
","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock
-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mo
ck-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-93
16","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 
LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 
docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 00:05:13.196: INFO: 
Logging kubelet events for node latest-worker2
Mar 22 00:05:13.223: INFO: 
Logging pods the kubelet thinks are on node latest-worker2
Mar 22 00:05:13.256: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:13.256: INFO: 	Container chaos-daemon ready: true, restart count 0
Mar 22 00:05:13.256: INFO: back-off-cap started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:13.256: INFO: 	Container back-off-cap ready: false, restart count 4
Mar 22 00:05:13.256: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:13.256: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 22 00:05:13.256: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:05:13.256: INFO: 	Container kindnet-cni ready: true, restart count 0
W0322 00:05:13.360322       7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:05:13.646: INFO: 
Latency metrics for node latest-worker2
Mar 22 00:05:13.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4742" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [239.080 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903

  Mar 22 00:05:10.682: Unexpected error:
      : {
          Err: {
              s: "error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip\ncommand terminated with exit code 7\n\nerror:\nexit status 7",
          },
          Code: 7,
      }
      error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4742 exec pause-pod-6d56d7cdf5-phkln -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip:
      Command stdout:
      
      stderr:
      + curl -q -s --connect-timeout 30 10.96.71.53:8080/clientip
      command terminated with exit code 7
      
      error:
      exit status 7
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133
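  For anyone triaging the failure above: curl exit code 7 means "failed to connect to host", i.e. the probe pod found no listener (or had its traffic dropped) at 10.96.71.53:8080 — a connectivity problem, not an HTTP-level error. A minimal local sketch of the same exit code, assuming nothing is listening on 127.0.0.1 port 1:

```shell
# curl exit code 7 (CURLE_COULDNT_CONNECT) is what the in-pod probe above
# reported for 10.96.71.53:8080. Reproduce it locally against a closed port
# (assumption: no service is bound to 127.0.0.1:1).
curl -q -s --connect-timeout 5 http://127.0.0.1:1/clientip
echo "curl exit code: $?"   # expected: 7 when the connection is refused
```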
------------------------------
{"msg":"FAILED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":54,"completed":27,"skipped":4278,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
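  The {"msg":...} records interleaved with the log are single-line JSON progress summaries emitted after every spec, carrying the running pass/fail counters. A small sketch, using a simplified record shaped like the one above, that pulls the counters out with sed:

```shell
# Each Ginkgo progress record is one JSON object on one line. Extract the
# running counters with sed; the record below is a trimmed-down example of
# the shape seen in this log (the real records also carry a "failures" list).
line='{"msg":"PASSED example spec","total":54,"completed":28,"skipped":4367,"failed":8}'
total=$(printf '%s' "$line" | sed -n 's/.*"total":\([0-9]*\).*/\1/p')
completed=$(printf '%s' "$line" | sed -n 's/.*"completed":\([0-9]*\).*/\1/p')
failed=$(printf '%s' "$line" | sed -n 's/.*"failed":\([0-9]*\).*/\1/p')
echo "$completed/$total completed, $failed failed"   # → 28/54 completed, 8 failed
```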
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] NetworkPolicy API 
  should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:05:13.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Mar 22 00:05:14.043: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Mar 22 00:05:14.062: INFO: starting watch
STEP: patching
STEP: updating
Mar 22 00:05:14.145: INFO: waiting for watch events with expected annotations
Mar 22 00:05:14.145: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Mar 22 00:05:14.145: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:05:14.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-9269" for this suite.
•{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":54,"completed":28,"skipped":4367,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:05:14.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-1764
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 22 00:05:14.931: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 22 00:05:15.152: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:05:18.008: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:05:19.723: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:05:21.417: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:05:23.160: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:05:25.189: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:05:27.459: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:05:29.162: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:05:31.178: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:05:33.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:05:35.181: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 22 00:05:35.228: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 22 00:05:41.500: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 22 00:05:41.500: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 22 00:05:42.889: INFO: Service node-port-service in namespace nettest-1764 found.
Mar 22 00:05:43.978: INFO: Service session-affinity-service in namespace nettest-1764 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 22 00:05:45.012: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 22 00:05:46.208: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) netserver-0 (endpoint) --> 10.96.100.159:80 (config.clusterIP)
Mar 22 00:05:46.493: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=http&host=10.96.100.159&port=80&tries=1'] Namespace:nettest-1764 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:05:46.493: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:05:46.721: INFO: Waiting for responses: map[netserver-1:{}]
Mar 22 00:05:48.746: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=http&host=10.96.100.159&port=80&tries=1'] Namespace:nettest-1764 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:05:48.746: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:05:48.901: INFO: Waiting for responses: map[netserver-1:{}]
Mar 22 00:05:50.927: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=http&host=10.96.100.159&port=80&tries=1'] Namespace:nettest-1764 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:05:50.927: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:05:51.092: INFO: Waiting for responses: map[]
Mar 22 00:05:51.092: INFO: reached 10.96.100.159 after 2/34 tries
STEP: dialing(http) netserver-0 (endpoint) --> 172.18.0.9:31493 (nodeIP)
Mar 22 00:05:51.134: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31493&tries=1'] Namespace:nettest-1764 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:05:51.134: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:05:51.279: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:05:53.324: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.16:8080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31493&tries=1'] Namespace:nettest-1764 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:05:53.324: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:05:53.521: INFO: Waiting for responses: map[]
Mar 22 00:05:53.521: INFO: reached 172.18.0.9 after 1/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:05:53.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1764" for this suite.

• [SLOW TEST:38.955 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http","total":54,"completed":29,"skipped":4487,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
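  The "Waiting for responses: map[...]" lines in the passing test above reflect simple bookkeeping: the framework keeps the set of endpoint hostnames that have not yet answered a /dial probe, removes each hostname returned by an attempt, and succeeds once the set is empty. A portable-shell sketch of that loop, with hostnames mirroring the log (the reply value is a hypothetical single-attempt result):

```shell
# Track which endpoints behind the service still owe a /dial answer.
expected="netserver-0 netserver-1"   # endpoints behind the service
reply="netserver-0"                  # hostname from one (hypothetical) /dial reply
remaining=""
for host in $expected; do
  case " $reply " in
    *" $host "*) ;;                  # this endpoint answered: drop it
    *) remaining="$remaining $host" ;;
  esac
done
echo "still waiting on:${remaining}"   # → still waiting on: netserver-1
```

Once `remaining` is empty the test logs "Waiting for responses: map[]" and counts the try as successful, as seen at "reached 10.96.100.159 after 2/34 tries" above.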
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should complete a service status lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2212
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:05:53.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should complete a service status lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2212
STEP: creating a Service
STEP: watching for the Service to be added
Mar 22 00:05:53.849: INFO: Found Service test-service-g86ns in namespace services-624 with labels: map[test-service-static:true] & ports [{http TCP  80 {0 80 } 0}]
Mar 22 00:05:53.849: INFO: Service test-service-g86ns created
STEP: Getting /status
Mar 22 00:05:53.877: INFO: Service test-service-g86ns has LoadBalancer: {[]}
STEP: patching the ServiceStatus
STEP: watching for the Service to be patched
Mar 22 00:05:53.897: INFO: observed Service test-service-g86ns in namespace services-624 with annotations: map[] & LoadBalancer: {[]}
Mar 22 00:05:53.897: INFO: Found Service test-service-g86ns in namespace services-624 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1  []}]}
Mar 22 00:05:53.897: INFO: Service test-service-g86ns has service status patched
STEP: updating the ServiceStatus
Mar 22 00:05:53.994: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Service to be updated
Mar 22 00:05:53.996: INFO: Observed Service test-service-g86ns in namespace services-624 with annotations: map[] & Conditions: {[]}
Mar 22 00:05:53.996: INFO: Observed event: &Service{ObjectMeta:{test-service-g86ns  services-624  80c7cfc8-ffa6-408e-8dac-033b446c7244 6985120 0 2021-03-22 00:05:53 +0000 UTC   map[test-service-static:true] map[patchedstatus:true] [] []  [{e2e.test Update v1 2021-03-22 00:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.96.178.136,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:nil,ClusterIPs:[10.96.178.136],IPFamilies:[],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},}
Mar 22 00:05:53.996: INFO: Found Service test-service-g86ns in namespace services-624 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Mar 22 00:05:53.996: INFO: Service test-service-g86ns has service status updated
STEP: patching the service
STEP: watching for the Service to be patched
Mar 22 00:05:54.172: INFO: observed Service test-service-g86ns in namespace services-624 with labels: map[test-service-static:true]
Mar 22 00:05:54.172: INFO: observed Service test-service-g86ns in namespace services-624 with labels: map[test-service-static:true]
Mar 22 00:05:54.172: INFO: observed Service test-service-g86ns in namespace services-624 with labels: map[test-service-static:true]
Mar 22 00:05:54.172: INFO: Found Service test-service-g86ns in namespace services-624 with labels: map[test-service:patched test-service-static:true]
Mar 22 00:05:54.172: INFO: Service test-service-g86ns patched
STEP: deleting the service
STEP: watching for the Service to be deleted
Mar 22 00:05:54.426: INFO: Observed event: ADDED
Mar 22 00:05:54.426: INFO: Observed event: MODIFIED
Mar 22 00:05:54.426: INFO: Observed event: MODIFIED
Mar 22 00:05:54.426: INFO: Observed event: MODIFIED
Mar 22 00:05:54.426: INFO: Found Service test-service-g86ns in namespace services-624 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true]
Mar 22 00:05:54.426: INFO: Service test-service-g86ns deleted
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:05:54.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-624" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle","total":54,"completed":30,"skipped":4508,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
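The single-line JSON records emitted between specs (like the PASSED record above) are machine-readable progress snapshots. A minimal sketch of parsing one into a summary; the abbreviated record below is a shortened example, not the full failure list from this run:

```python
import json

def parse_progress(line: str) -> dict:
    """Parse one of the JSON progress records the suite emits after each spec.

    Returns the spec message, how many specs completed out of the planned
    total, and the failed spec names accumulated so far.
    """
    rec = json.loads(line)
    return {
        "msg": rec["msg"],
        "completed": rec["completed"],
        "total": rec["total"],
        "failed": rec.get("failures", []),
    }

# Abbreviated example record (the real one carries the full failure list).
line = ('{"msg":"PASSED [sig-network] Services should complete a service status lifecycle",'
        '"total":54,"completed":30,"skipped":4508,"failed":8,"failures":["spec-a","spec-b"]}')
summary = parse_progress(line)
print(f'{summary["completed"]}/{summary["total"]} specs completed, '
      f'{len(summary["failed"])} failures listed so far')
```

Feeding each such line through this gives a running view of suite progress without scraping the surrounding INFO output.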
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:05:54.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-2800
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 22 00:05:54.997: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 22 00:05:56.057: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:05:58.247: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:06:00.152: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:06:02.116: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:06:04.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:06:06.087: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:06:08.093: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:06:10.062: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:06:12.092: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:06:14.110: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:06:16.084: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:06:18.077: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 22 00:06:18.127: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 22 00:06:24.307: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 22 00:06:24.307: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 22 00:06:24.577: INFO: Service node-port-service in namespace nettest-2800 found.
Mar 22 00:06:25.122: INFO: Service session-affinity-service in namespace nettest-2800 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 22 00:06:26.152: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 22 00:06:27.157: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.96.177.229:80 (config.clusterIP)
Mar 22 00:06:27.412: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:27.412: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:27.549: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:06:29.717: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:29.717: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:30.051: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:06:32.153: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:32.153: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:32.325: INFO: Waiting for responses: map[]
Mar 22 00:06:32.325: INFO: reached 10.96.177.229 after 2/34 tries
STEP: Deleting a pod which will be replaced with a new endpoint
Mar 22 00:06:32.652: INFO: Waiting for pod netserver-0 to disappear
Mar 22 00:06:32.693: INFO: Pod netserver-0 no longer exists
Mar 22 00:06:33.694: INFO: Waiting for amount of service:node-port-service endpoints to be 1
STEP: dialing(http) test-container-pod --> 10.96.177.229:80 (config.clusterIP) (endpoint recovery)
Mar 22 00:06:38.721: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:38.721: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:38.863: INFO: Waiting for responses: map[]
Mar 22 00:06:40.877: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:40.877: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:40.998: INFO: Waiting for responses: map[]
Mar 22 00:06:43.003: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:43.003: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:43.113: INFO: Waiting for responses: map[]
Mar 22 00:06:45.119: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:45.119: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:45.246: INFO: Waiting for responses: map[]
Mar 22 00:06:47.251: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:47.251: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:47.395: INFO: Waiting for responses: map[]
Mar 22 00:06:49.399: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:49.399: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:49.510: INFO: Waiting for responses: map[]
Mar 22 00:06:51.514: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:51.514: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:51.627: INFO: Waiting for responses: map[]
Mar 22 00:06:53.633: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:53.633: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:53.752: INFO: Waiting for responses: map[]
Mar 22 00:06:55.756: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:55.756: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:55.877: INFO: Waiting for responses: map[]
Mar 22 00:06:57.881: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:57.881: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:06:57.991: INFO: Waiting for responses: map[]
Mar 22 00:06:59.995: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:06:59.995: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:00.119: INFO: Waiting for responses: map[]
Mar 22 00:07:02.123: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:02.123: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:02.210: INFO: Waiting for responses: map[]
Mar 22 00:07:04.215: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:04.215: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:04.346: INFO: Waiting for responses: map[]
Mar 22 00:07:06.368: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:06.368: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:06.510: INFO: Waiting for responses: map[]
Mar 22 00:07:08.513: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:08.514: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:08.609: INFO: Waiting for responses: map[]
Mar 22 00:07:10.637: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:10.638: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:10.723: INFO: Waiting for responses: map[]
Mar 22 00:07:12.727: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:12.727: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:12.823: INFO: Waiting for responses: map[]
Mar 22 00:07:14.830: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:14.830: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:14.960: INFO: Waiting for responses: map[]
Mar 22 00:07:16.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:16.965: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:17.100: INFO: Waiting for responses: map[]
Mar 22 00:07:19.104: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:19.104: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:19.206: INFO: Waiting for responses: map[]
Mar 22 00:07:21.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:21.212: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:21.332: INFO: Waiting for responses: map[]
Mar 22 00:07:23.337: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:23.337: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:23.458: INFO: Waiting for responses: map[]
Mar 22 00:07:25.462: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:25.462: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:25.593: INFO: Waiting for responses: map[]
Mar 22 00:07:27.596: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:27.596: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:27.714: INFO: Waiting for responses: map[]
Mar 22 00:07:29.724: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:29.724: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:29.822: INFO: Waiting for responses: map[]
Mar 22 00:07:31.827: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:31.827: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:31.927: INFO: Waiting for responses: map[]
Mar 22 00:07:33.932: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:33.932: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:34.082: INFO: Waiting for responses: map[]
Mar 22 00:07:36.153: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:36.154: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:36.305: INFO: Waiting for responses: map[]
Mar 22 00:07:38.465: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:38.465: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:38.629: INFO: Waiting for responses: map[]
Mar 22 00:07:40.759: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:40.759: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:41.004: INFO: Waiting for responses: map[]
Mar 22 00:07:43.043: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:43.043: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:43.515: INFO: Waiting for responses: map[]
Mar 22 00:07:45.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:45.960: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:46.061: INFO: Waiting for responses: map[]
Mar 22 00:07:48.230: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:48.230: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:48.514: INFO: Waiting for responses: map[]
Mar 22 00:07:50.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.19:9080/dial?request=hostname&protocol=http&host=10.96.177.229&port=80&tries=1'] Namespace:nettest-2800 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:07:50.798: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:07:51.152: INFO: Waiting for responses: map[]
Mar 22 00:07:51.152: INFO: reached 10.96.177.229 after 33/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:07:51.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2800" for this suite.

• [SLOW TEST:116.610 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":54,"completed":31,"skipped":4528,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
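The "Setting MaxTries for pod polling to 34 ... based on endpoint count 2" line and the "reached ... after 33/34 tries" counter above suggest a retry budget derived from the endpoint count. A sketch of that polling pattern; the formula `2*endpoints + 30` is an inference from the logged numbers (2 endpoints → 34 tries), not confirmed against the e2e framework source, and `dial_until_all_respond` is an illustrative helper, not the real framework function:

```python
import time

def max_tries(endpoint_count: int) -> int:
    # Assumption inferred from the log: 2 endpoints -> MaxTries 34,
    # which matches 2*endpoints + 30. Not verified against the source.
    return 2 * endpoint_count + 30

def dial_until_all_respond(dial_once, expected, tries, delay=2.0):
    """Repeat a dial probe until every expected endpoint has answered.

    `dial_once` returns the set of backend hostnames that responded to one
    probe. This mirrors the 'Waiting for responses: map[...]' loop in the
    log: the pending map shrinks as backends answer, and success is logged
    as 'reached <ip> after <attempt>/<tries> tries'.
    """
    pending = set(expected)
    for attempt in range(tries):
        pending -= dial_once()
        if not pending:
            return attempt  # zero-based, matching the "2/34 tries" style above
        time.sleep(delay)
    raise TimeoutError(f"still waiting for responses from: {sorted(pending)}")
```

In the endpoint-recovery phase above, the loop runs nearly to its budget (33/34) because the replacement endpoint takes time to become reachable.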
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] ESIPP [Slow] 
  should only target nodes with endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:07:51.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Mar 22 00:07:51.330: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:07:51.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1553" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866

S [SKIPPING] in Spec Setup (BeforeEach) [0.193 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:07:51.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-663
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 22 00:07:51.833: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 22 00:07:51.905: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:07:54.088: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:07:55.919: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:07:58.052: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:00.196: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:01.968: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:03.909: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:05.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:07.910: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:09.980: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:11.911: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:13.910: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:16.017: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 22 00:08:16.154: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 22 00:08:22.637: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 22 00:08:22.637: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 22 00:08:22.741: INFO: Service node-port-service in namespace nettest-663 found.
Mar 22 00:08:22.938: INFO: Service session-affinity-service in namespace nettest-663 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 22 00:08:23.952: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 22 00:08:24.958: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) test-container-pod --> 10.96.8.9:90 (config.clusterIP)
Mar 22 00:08:24.991: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.65:9080/dial?request=echo%20noooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo&protocol=udp&host=10.96.8.9&port=90&tries=1'] Namespace:nettest-663 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:08:24.991: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:08:25.109: INFO: Waiting for responses: map[]
Mar 22 00:08:25.109: INFO: reached 10.96.8.9 after 0/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:08:25.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-663" for this suite.

• [SLOW TEST:33.754 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":54,"completed":32,"skipped":4869,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
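Editor's note: the probe logged above is issued by the test's "webserver" (agnhost) pod via its /dial endpoint, which relays a URL-encoded payload to the service under test. A minimal sketch of how such a dial URL is composed, using the IPs and ports from the log (the helper name and encoding step are assumptions for illustration, not part of the e2e framework):

```shell
#!/bin/sh
# Sketch: build the /dial probe URL the log shows being curl'ed.
# 10.244.1.65:9080 is the test-container-pod, 10.96.8.9:90 the target
# service, both taken from the log above. sed-based encoding here only
# handles spaces, which suffices for this payload shape.
payload="echo hello"
encoded=$(printf '%s' "$payload" | sed 's/ /%20/g')
url="http://10.244.1.65:9080/dial?request=${encoded}&protocol=udp&host=10.96.8.9&port=90&tries=1"
printf '%s\n' "$url"
```

Run inside the cluster, the URL would then be fetched with `curl -g -q -s "$url"` as in the logged command; outside that cluster the IPs are unreachable, so this only demonstrates URL construction.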
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Firewall rule 
  control plane should not expose well-known ports
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:08:25.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Mar 22 00:08:25.184: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:08:25.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-6095" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.093 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should function for pod-Service(hostNetwork): udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:08:25.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service(hostNetwork): udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473
Mar 22 00:08:25.361: INFO: skip because pods can not reach the endpoint in the same host if using UDP and hostNetwork #95565
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:08:25.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6445" for this suite.

S [SKIPPING] [0.153 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service(hostNetwork): udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:473

    skip because pods can not reach the endpoint in the same host if using UDP and hostNetwork #95565

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:08:25.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-4372
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 22 00:08:25.565: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 22 00:08:25.660: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:08:27.795: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:08:29.699: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:08:31.682: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:34.115: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:35.669: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:37.664: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:39.664: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:41.801: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:43.699: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:45.665: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:47.665: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:08:49.724: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 22 00:08:49.953: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 22 00:08:58.443: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 22 00:08:58.443: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 22 00:08:58.587: INFO: Service node-port-service in namespace nettest-4372 found.
Mar 22 00:08:58.712: INFO: Service session-affinity-service in namespace nettest-4372 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 22 00:08:59.718: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 22 00:09:00.723: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) 172.18.0.9 (node) --> 172.18.0.9:32321 (nodeIP) and getting ALL host endpoints
Mar 22 00:09:00.741: INFO: Going to poll 172.18.0.9 on port 32321 at least 0 times, with a maximum of 34 tries before failing
Mar 22 00:09:00.745: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 32321 | grep -v '^\s*$'] Namespace:nettest-4372 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:09:00.745: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:09:01.874: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1])
Mar 22 00:09:03.880: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 32321 | grep -v '^\s*$'] Namespace:nettest-4372 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:09:03.880: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:09:04.986: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1])
Mar 22 00:09:07.167: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 32321 | grep -v '^\s*$'] Namespace:nettest-4372 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:09:07.167: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:09:08.509: INFO: Found all 2 expected endpoints: [netserver-0 netserver-1]
STEP: Deleting the node port access point
STEP: dialing(udp) 172.18.0.9 (node) --> 172.18.0.9:32321 (nodeIP) and getting ZERO host endpoints
Mar 22 00:09:23.594: INFO: Going to poll 172.18.0.9 on port 32321 at least 34 times, with a maximum of 34 tries before failing
Mar 22 00:09:23.868: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 32321 | grep -v '^\s*$'] Namespace:nettest-4372 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:09:23.868: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:09:24.371: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.9 32321 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 22 00:09:24.371: INFO: Waiting for [] endpoints (expected=[], actual=[])
[... 32 further identical poll attempts elided (Mar 22 00:09:26 through 00:10:33); each ExecWithOptions run of the same nc command failed with exit code 1, empty stdout/stderr, followed by "Waiting for [] endpoints (expected=[], actual=[])" ...]
Mar 22 00:10:36.018: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.18.0.9 32321 | grep -v '^\s*$'] Namespace:nettest-4372 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:10:36.018: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:10:36.595: INFO: Failed to execute "echo hostName | nc -w 1 -u 172.18.0.9 32321 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Mar 22 00:10:36.595: INFO: Found all 0 expected endpoints: []
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:10:36.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4372" for this suite.

• [SLOW TEST:131.309 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]","total":54,"completed":33,"skipped":5136,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
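Editor's note on the block above: the repeated `command terminated with exit code 1` lines are the success path of this check. After the NodePort update the expected endpoint list is empty (`expected=[]`), so the probe `echo hostName | nc -w 1 -u 172.18.0.9 32321 | grep -v '^\s*$'` is supposed to return nothing. A minimal offline sketch of why that pipeline exits 1, reproducing only the `grep` stage since no cluster is reachable here:

```shell
# grep -v '^[[:space:]]*$' drops blank lines; when every input line is blank
# nothing is selected, so grep exits 1 with empty stdout -- exactly the
# 'exit code 1, stdout: ""' the framework logs above.
printf '\n\n\n' | grep -v '^[[:space:]]*$'
echo "grep exit code: $?"
```

In this run the empty result is accepted because no endpoints are expected on the old port; in a populated-endpoint check the same exit code would instead trigger another retry.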
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:10:36.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-6861
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 22 00:10:36.907: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 22 00:10:37.471: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:10:39.539: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:10:42.007: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:10:43.660: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:10:45.502: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:10:47.679: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:10:49.575: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:10:51.479: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:10:53.476: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 22 00:10:53.483: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 22 00:10:55.488: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 22 00:10:57.534: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 22 00:10:59.489: INFO: The status of Pod netserver-1 is Running (Ready = false)
Mar 22 00:11:01.490: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 22 00:11:05.552: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 22 00:11:05.553: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 22 00:11:05.637: INFO: Service node-port-service in namespace nettest-6861 found.
Mar 22 00:11:05.745: INFO: Service session-affinity-service in namespace nettest-6861 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 22 00:11:06.823: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 22 00:11:07.827: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.96.28.126:80 (config.clusterIP)
Mar 22 00:11:07.861: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=10.96.28.126&port=80&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:07.861: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:07.974: INFO: Waiting for responses: map[netserver-1:{}]
Mar 22 00:11:10.042: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=10.96.28.126&port=80&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:10.042: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:10.194: INFO: Waiting for responses: map[]
Mar 22 00:11:10.194: INFO: reached 10.96.28.126 after 1/34 tries
STEP: dialing(http) test-container-pod --> 172.18.0.9:31345 (nodeIP)
Mar 22 00:11:10.242: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31345&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:10.242: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:10.437: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:11:12.473: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31345&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:12.473: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:12.810: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:11:14.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31345&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:14.965: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:15.900: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:11:17.970: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31345&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:17.970: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:18.222: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:11:20.270: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31345&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:20.271: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:20.440: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:11:22.525: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31345&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:22.525: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:22.744: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:11:24.814: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31345&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:24.814: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:25.023: INFO: Waiting for responses: map[netserver-0:{}]
Mar 22 00:11:27.057: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.41:9080/dial?request=hostname&protocol=http&host=172.18.0.9&port=31345&tries=1'] Namespace:nettest-6861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:11:27.057: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:11:27.331: INFO: Waiting for responses: map[]
Mar 22 00:11:27.331: INFO: reached 172.18.0.9 after 7/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:11:27.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6861" for this suite.

• [SLOW TEST:50.804 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: http","total":54,"completed":34,"skipped":5238,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
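Editor's note on the curl lines above: the framework execs into test-container-pod and asks the webserver listening on port 9080 to proxy a `hostname` request to the target via its `/dial` endpoint, polling until every expected backend has answered (here 1/34 tries for the cluster IP, 7/34 for the node IP). A sketch that merely reconstructs the request shape from values in this log, without contacting a cluster:

```shell
# Values copied from this run; the printed command matches the
# ExecWithOptions entries above. tries=1 means each poll dials the target
# once, and the framework keeps polling until every expected hostname
# (netserver-0, netserver-1) has been seen in a response.
POD_IP=10.244.2.41    # test-container-pod address from this run
TARGET=10.96.28.126   # service cluster IP under test
PORT=80
echo "curl -g -q -s 'http://${POD_IP}:9080/dial?request=hostname&protocol=http&host=${TARGET}&port=${PORT}&tries=1'"
```

The "Waiting for responses: map[netserver-0:{}]" lines show the set of backends still unheard from; the poll succeeds once that map is empty.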
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:11:27.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-8439
STEP: changing service nodeport-range-test to out-of-range NodePort 25895
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 25895
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:11:28.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8439" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":54,"completed":35,"skipped":5308,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
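Editor's note on the block above: the out-of-range check relies on the API server's NodePort allocation range, which defaults to 30000-32767 (kube-apiserver `--service-node-port-range`); 25895 falls below it, so both the update and the re-create are expected to be rejected. A local sketch of the same bounds check:

```shell
# 30000-32767 is the default --service-node-port-range; 25895 (the port
# used by this test) lies outside it, which is why the API server must
# refuse both the changed service and the newly created one.
port=25895
if [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]; then
  echo "$port: in default NodePort range"
else
  echo "$port: out of default NodePort range"
fi
```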
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Conntrack 
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:11:28.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-607
STEP: creating a client pod for probing the service svc-udp
Mar 22 00:11:28.899: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:31.069: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:33.290: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:35.002: INFO: The status of Pod pod-client is Running (Ready = true)
Mar 22 00:11:35.129: INFO: Pod client logs: Mon Mar 22 00:11:33 UTC 2021
Mon Mar 22 00:11:33 UTC 2021 Try: 1

Mon Mar 22 00:11:33 UTC 2021 Try: 2

Mon Mar 22 00:11:33 UTC 2021 Try: 3

Mon Mar 22 00:11:33 UTC 2021 Try: 4

Mon Mar 22 00:11:33 UTC 2021 Try: 5

Mon Mar 22 00:11:33 UTC 2021 Try: 6

Mon Mar 22 00:11:33 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Mar 22 00:11:35.168: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:37.380: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:39.439: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:41.174: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-607 to expose endpoints map[pod-server-1:[80]]
Mar 22 00:11:41.229: INFO: successfully validated that service svc-udp in namespace conntrack-607 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 172.18.0.13
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Mar 22 00:11:51.690: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:54.140: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:55.872: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:11:57.720: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Mar 22 00:11:57.767: INFO: Cleaning up pod-server-1 pod
Mar 22 00:11:57.858: INFO: Waiting for pod pod-server-1 to disappear
Mar 22 00:11:57.877: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-607 to expose endpoints map[pod-server-2:[80]]
Mar 22 00:11:58.139: INFO: successfully validated that service svc-udp in namespace conntrack-607 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 172.18.0.13
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:12:08.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-607" for this suite.

• [SLOW TEST:39.960 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":54,"completed":36,"skipped":5426,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
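Editor's note on the block above: the `Try: N` lines quoted from the client pod come from a loop that keeps sending the same UDP datagram, so the kernel's conntrack table pins the flow to one backend; the test then deletes pod-server-1 and verifies the stale entry does not black-hole traffic to pod-server-2. A hedged sketch of such a probe loop (the `nc` target is illustrative and commented out, since no cluster is reachable here):

```shell
# Emits one timestamped "Try: N" line per attempt, mirroring the client-pod
# log format shown above. The actual UDP send is left commented out because
# it requires a live service; SVC_IP is a placeholder, not from this log.
i=1
while [ "$i" -le 3 ]; do
  echo "$(date -u) Try: $i"
  # echo hostname | nc -u -w1 "$SVC_IP" 80   # real probe; needs the cluster
  i=$((i + 1))
done
```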
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Services 
  should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:12:08.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-2071
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 22 00:12:08.821: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Mar 22 00:12:08.962: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:12:10.987: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:12:13.188: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:12:14.999: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:12:17.242: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:12:19.373: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:12:21.074: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:12:23.129: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:12:25.008: INFO: The status of Pod netserver-0 is Running (Ready = false)
Mar 22 00:12:27.733: INFO: The status of Pod netserver-0 is Running (Ready = true)
Mar 22 00:12:28.121: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Mar 22 00:12:34.928: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Mar 22 00:12:34.928: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Mar 22 00:12:35.732: INFO: Service node-port-service in namespace nettest-2071 found.
Mar 22 00:12:36.295: INFO: Service session-affinity-service in namespace nettest-2071 found.
STEP: Waiting for NodePort service to expose endpoint
Mar 22 00:12:37.314: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Mar 22 00:12:38.423: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.96.208.13:80 (config.clusterIP)
Mar 22 00:12:38.664: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.53:9080/dial?request=echo?msg=424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424
24242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242424242&protocol=http&host=10.96.208.13&port=80&tries=1'] Namespace:nettest-2071 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 22 00:12:38.664: INFO: >>> kubeConfig: /root/.kube/config
Mar 22 00:12:38.863: INFO: Waiting for responses: map[]
Mar 22 00:12:38.863: INFO: reached 10.96.208.13 after 0/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:12:38.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2071" for this suite.

• [SLOW TEST:30.479 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":54,"completed":37,"skipped":5469,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMar 22 00:12:39.016: INFO: Running AfterSuite actions on all nodes
Mar 22 00:12:39.016: INFO: Running AfterSuite actions on node 1
Mar 22 00:12:39.016: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_network/junit_01.xml
{"msg":"Test Suite completed","total":54,"completed":37,"skipped":5692,"failed":8,"failures":["[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] Networking Granular Checks: Services should function for node-Service: http","[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}


Summarizing 8 Failures:

[Fail] [sig-network] Services [It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1201

[Fail] [sig-network] Networking Granular Checks: Services [It] should function for node-Service: http 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:829

[Fail] [sig-network] DNS configMap nameserver Forward PTR lookup [It] should forward PTR records lookup to upstream nameserver [Slow][Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:202

[Fail] [sig-network] Services [It] should allow pods to hairpin back to themselves through services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1012

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

[Fail] [sig-network] Services [It] should implement service.kubernetes.io/headless 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2679

[Fail] [sig-network] Services [It] should implement service.kubernetes.io/service-proxy-name 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1889

[Fail] [sig-network] Services [It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133

Ran 45 of 5737 Specs in 3114.964 seconds
FAIL! -- 37 Passed | 8 Failed | 0 Pending | 5692 Skipped
--- FAIL: TestE2E (3115.13s)
FAIL