Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1634961106 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 23 03:51:48.537: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.539: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 23 03:51:48.566: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 03:51:48.633: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 03:51:48.633: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 03:51:48.633: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 03:51:48.633: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 03:51:48.633: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 23 03:51:48.643: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 23 03:51:48.643: INFO: e2e test version: v1.21.5
Oct 23 03:51:48.644: INFO: kube-apiserver version: v1.21.1
Oct 23 03:51:48.644: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.650: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Oct 23 03:51:48.649: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.670: INFO: Cluster IP family: ipv4
SSS
------------------------------
Oct 23 03:51:48.653: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.675: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 23 03:51:48.656: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.676: INFO: Cluster IP family: ipv4
Oct 23 03:51:48.655: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.676: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
Oct 23 03:51:48.664: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.684: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 23 03:51:48.688: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.708: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Oct 23 03:51:48.692: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.713: INFO: Cluster IP family: ipv4
SSSS
------------------------------
Oct 23 03:51:48.694: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.715: INFO: Cluster IP family: ipv4
SS
------------------------------
Oct 23 03:51:48.694: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:51:48.715: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
W1023 03:51:49.109118      36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:49.109: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:49.111: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:91
Oct 23 03:51:49.133: INFO: (0) /api/v1/nodes/node1/proxy/logs/:
anaconda/
audit/
boot.log
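
The listing above is the response body returned through the node proxy subresource: the apiserver forwards the request to the kubelet's /logs/ handler, so the same directories can be fetched by hand. A minimal sketch, assuming a node named node1 and a working kubeconfig:

# Kubelet log directory via the apiserver's node proxy subresource.
kubectl get --raw /api/v1/nodes/node1/proxy/logs/

# Same request with the kubelet port named explicitly, as in the second proxy spec below.
kubectl get --raw /api/v1/nodes/node1:10250/proxy/logs/
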
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
W1023 03:51:49.243338      35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:49.243: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:49.245: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:85
Oct 23 03:51:49.254: INFO: (0) /api/v1/nodes/node1:10250/proxy/logs/: 
anaconda/
audit/
boot.log
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1023 03:51:49.249824      29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:49.250: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:49.251: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide unchanging, static URL paths for kubernetes api services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:112
STEP: testing: /healthz
STEP: testing: /api
STEP: testing: /apis
STEP: testing: /metrics
STEP: testing: /openapi/v2
STEP: testing: /version
STEP: testing: /logs
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:51:49.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9311" for this suite.

•SSS
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":1,"skipped":172,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1023 03:51:49.252104      26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:49.252: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:49.254: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
STEP: creating service nodeport-reuse with type NodePort in namespace services-2944
STEP: deleting original service nodeport-reuse
Oct 23 03:51:49.276: INFO: Creating new host exec pod
Oct 23 03:51:49.290: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:51.294: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:53.294: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:55.294: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:57.294: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:59.293: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:01.296: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:03.298: INFO: The status of Pod hostexec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:05.295: INFO: The status of Pod hostexec is Running (Ready = true)
Oct 23 03:52:05.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2944 exec hostexec -- /bin/sh -x -c ! ss -ant46 'sport = :32645' | tail -n +2 | grep LISTEN'
Oct 23 03:52:05.957: INFO: stderr: "+ tail -n +2\n+ grep LISTEN\n+ ss -ant46 'sport = :32645'\n"
Oct 23 03:52:05.957: INFO: stdout: ""
STEP: creating service nodeport-reuse with same NodePort 32645
STEP: deleting service nodeport-reuse in namespace services-2944
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:05.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2944" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:16.768 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1561
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":1,"skipped":187,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:06.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 23 03:52:06.551: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:06.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-5094" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should only target nodes with endpoints [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:959

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
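
The ESIPP specs (like the Ingress and Firewall specs further down) are gated on a cloud provider because they need real LoadBalancer plumbing, so they skip in BeforeEach when the framework runs with the local provider. A hedged sketch of how such specs are typically selected when a supported provider is available; the flag names are assumed from the upstream e2e harness:

# Run only the ESIPP specs against a GCE-backed cluster.
./e2e.test --kubeconfig="$HOME/.kube/config" --provider=gce \
  -ginkgo.focus='\[sig-network\] ESIPP'
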
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:06.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:69
Oct 23 03:52:06.637: INFO: Found ClusterRoles; assuming RBAC is enabled.
[BeforeEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:688
Oct 23 03:52:06.742: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:706
STEP: No ingress created, no cleanup necessary
[AfterEach] [sig-network] Loadbalancing: L7
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:06.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-1205" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.145 seconds]
[sig-network] Loadbalancing: L7
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  [Slow] Nginx
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:685
    should conform to Ingress spec [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:722

    Only supported for providers [gce gke] (not local)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingress.go:689
------------------------------
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
STEP: Preparing a test DNS service with injected DNS names...
Oct 23 03:51:49.671: INFO: Created pod &Pod{ObjectMeta:{e2e-configmap-dns-server-1c0b59fb-6de5-4fa5-b9c0-bd91df9993eb  dns-1469  cead8934-216c-4451-83fc-142fb313eef0 142571 0 2021-10-23 03:51:49 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-10-23 03:51:49 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"coredns-config\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:name":{}},"f:name":{}}}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:coredns-config,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:&ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:e2e-coredns-configmap-gpbd4,},Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-tpsnr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[/coredns],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:coredns-config,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tpsnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 23 03:52:11.681: INFO: testServerIP is 10.244.4.123
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Oct 23 03:52:11.690: INFO: Created pod &Pod{ObjectMeta:{e2e-dns-utils  dns-1469  d6146067-9d46-4071-8107-3e2e0c918019 142954 0 2021-10-23 03:52:11 +0000 UTC   map[] map[kubernetes.io/psp:collectd] [] []  [{e2e.test Update v1 2021-10-23 03:52:11 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:options":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zgvtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgvtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[10.244.4.123],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{PodDNSConfigOption{Name:ndots,Value:*2,},},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS option is configured on pod...
Oct 23 03:52:17.696: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-1469 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:17.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Oct 23 03:52:17.785: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-1469 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:17.785: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:17.875: INFO: Deleting pod e2e-dns-utils...
Oct 23 03:52:17.883: INFO: Deleting pod e2e-configmap-dns-server-1c0b59fb-6de5-4fa5-b9c0-bd91df9993eb...
Oct 23 03:52:17.889: INFO: Deleting configmap e2e-coredns-configmap-gpbd4...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:17.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1469" for this suite.


• [SLOW TEST:28.261 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":2,"skipped":216,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:17.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Oct 23 03:52:18.019: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:18.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-3169" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have correct firewall rules for e2e cluster [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:48.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1023 03:51:48.845672      30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:48.845: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:48.849: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: http [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369
STEP: Performing setup for networking test in namespace nettest-2819
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:51:48.962: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:51:48.991: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:50.996: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:52.996: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:54.997: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:56.994: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:51:58.994: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:00.997: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:02.996: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:04.995: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:06.995: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:08.994: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:52:08.998: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:11.002: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:13.003: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:15.003: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:17.002: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:19.002: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:21.003: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:52:29.043: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:52:29.043: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:29.049: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:29.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2819" for this suite.


S [SKIPPING] [40.242 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: http [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:369

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
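
The Granular Checks spec above finishes its pod setup and then skips because the framework could not see two schedulable nodes; the "(not -1)" suggests the node-count lookup itself failed rather than the cluster genuinely having a single node. A quick way to inspect what the framework should have found before re-running the [Slow] networking specs:

# The networking specs need at least two schedulable worker nodes.
kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
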
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1023 03:51:49.357083      39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:49.357: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:49.359: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for multiple endpoint-Services with same selector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289
STEP: Performing setup for networking test in namespace nettest-4214
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:51:49.467: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:51:49.502: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:51.507: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:53.506: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:55.507: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:57.506: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:51:59.505: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:01.506: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:03.506: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:05.507: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:07.506: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:09.506: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:11.510: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:52:11.515: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:13.517: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:15.520: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:17.519: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:19.518: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:21.519: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:52:29.542: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:52:29.542: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:29.549: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:29.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4214" for this suite.


S [SKIPPING] [40.232 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for multiple endpoint-Services with same selector [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:289

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:48.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1023 03:51:48.995144      33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:48.995: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:48.997: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should check kube-proxy urls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138
STEP: Performing setup for networking test in namespace nettest-5607
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:51:49.106: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:51:49.138: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:51.143: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:53.142: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:55.146: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:57.142: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:51:59.142: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:01.144: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:03.141: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:05.143: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:07.142: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:09.142: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:11.143: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:52:11.147: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:13.151: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:15.151: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:17.151: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:19.151: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:21.153: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:52:33.192: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:52:33.192: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:33.198: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:33.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5607" for this suite.


S [SKIPPING] [44.239 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should check kube-proxy urls [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:138

  Requires at least 2 nodes (not -1)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
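
The kube-proxy URLs spec was skipped for the same node-count reason, but the endpoint it would probe can be checked by hand. A sketch assuming kube-proxy's default healthz bind port of 10256 and a node address reachable from where the command runs:

# Hit kube-proxy's health endpoint on the first node; a healthy proxy answers HTTP 200.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl -s -o /dev/null -w '%{http_code}\n' "http://${NODE_IP}:10256/healthz"
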
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
W1023 03:51:49.101361      24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:49.101: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:49.103: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153
STEP: Performing setup for networking test in namespace nettest-7200
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:51:49.214: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:51:49.254: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:51.258: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:53.259: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:55.262: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:57.257: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:51:59.270: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:01.263: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:03.261: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:05.263: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:07.259: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:09.257: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:11.259: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:52:11.264: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:13.269: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:15.268: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:17.268: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:19.268: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:52:21.274: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:52:33.299: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:52:33.299: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:33.306: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:33.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7200" for this suite.


S [SKIPPING] [44.237 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:153

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
STEP: creating service externalip-test with type=clusterIP in namespace services-4053
STEP: creating replication controller externalip-test in namespace services-4053
I1023 03:51:49.835839      36 runners.go:190] Created replication controller with name: externalip-test, namespace: services-4053, replica count: 2
I1023 03:51:52.887653      36 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:51:55.888403      36 runners.go:190] externalip-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:51:58.888808      36 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:01.891300      36 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:04.892330      36 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:07.893144      36 runners.go:190] externalip-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:10.894936      36 runners.go:190] externalip-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 23 03:52:10.895: INFO: Creating new exec pod
Oct 23 03:52:29.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4053 exec execpodthkkj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalip-test 80'
Oct 23 03:52:31.448: INFO: stderr: "+ nc -v -t -w 2 externalip-test 80\n+ echo hostName\nConnection to externalip-test 80 port [tcp/http] succeeded!\n"
Oct 23 03:52:31.449: INFO: stdout: "externalip-test-w7xqm"
Oct 23 03:52:31.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4053 exec execpodthkkj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.12.164 80'
Oct 23 03:52:31.815: INFO: stderr: "+ nc -v -t -w 2 10.233.12.164 80\nConnection to 10.233.12.164 80 port [tcp/http] succeeded!\n+ echo hostName\n"
Oct 23 03:52:31.815: INFO: stdout: ""
Oct 23 03:52:32.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4053 exec execpodthkkj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.12.164 80'
Oct 23 03:52:33.080: INFO: stderr: "+ nc -v -t -w 2 10.233.12.164 80\nConnection to 10.233.12.164 80 port [tcp/http] succeeded!\n+ echo hostName\n"
Oct 23 03:52:33.080: INFO: stdout: "externalip-test-8r6b6"
Oct 23 03:52:33.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4053 exec execpodthkkj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Oct 23 03:52:33.363: INFO: stderr: "+ nc -v -t -w 2 203.0.113.250 80\n+ echo hostName\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Oct 23 03:52:33.363: INFO: stdout: ""
Oct 23 03:52:34.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4053 exec execpodthkkj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Oct 23 03:52:34.805: INFO: stderr: "+ nc -v -t -w 2 203.0.113.250 80\n+ echo hostName\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Oct 23 03:52:34.806: INFO: stdout: ""
Oct 23 03:52:35.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-4053 exec execpodthkkj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Oct 23 03:52:35.948: INFO: stderr: "+ nc -v -t -w 2 203.0.113.250 80\n+ echo hostName\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Oct 23 03:52:35.948: INFO: stdout: "externalip-test-w7xqm"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:35.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4053" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:46.154 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":2,"skipped":407,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:29.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
STEP: creating a service with no endpoints
STEP: creating execpod-noendpoints on node node1
Oct 23 03:52:29.655: INFO: Creating new exec pod
Oct 23 03:52:35.683: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node node1
Oct 23 03:52:35.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2278 exec execpod-noendpointsbrhrl -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Oct 23 03:52:37.027: INFO: rc: 1
Oct 23 03:52:37.027: INFO: error contained 'REFUSED', as expected: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2278 exec execpod-noendpointsbrhrl -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:37.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2278" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:7.411 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":1,"skipped":264,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:37.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 23 03:52:37.544: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:37.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-3575" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should handle updates to ExternalTrafficPolicy field [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1095

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:37.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should prevent NodePort collisions
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1440
STEP: creating service nodeport-collision-1 with type NodePort in namespace services-5535
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace services-5535
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:37.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5535" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":2,"skipped":539,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:18.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename no-snat-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
STEP: creating a test pod on each Node
STEP: waiting for all of the no-snat-test pods to be scheduled and running
STEP: sending traffic from each pod to the others and checking that SNAT does not occur
Oct 23 03:52:38.651: INFO: Waiting up to 2m0s to get response from 10.244.3.197:8080
Oct 23 03:52:38.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-test2tq2w -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.197:8080/clientip'
Oct 23 03:52:38.916: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.197:8080/clientip\n"
Oct 23 03:52:38.916: INFO: stdout: "10.244.1.8:39958"
STEP: Verifying the preserved source ip
Oct 23 03:52:38.916: INFO: Waiting up to 2m0s to get response from 10.244.4.131:8080
Oct 23 03:52:38.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-test2tq2w -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.131:8080/clientip'
Oct 23 03:52:39.153: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.131:8080/clientip\n"
Oct 23 03:52:39.153: INFO: stdout: "10.244.1.8:41216"
STEP: Verifying the preserved source ip
Oct 23 03:52:39.153: INFO: Waiting up to 2m0s to get response from 10.244.2.7:8080
Oct 23 03:52:39.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-test2tq2w -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip'
Oct 23 03:52:39.397: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip\n"
Oct 23 03:52:39.397: INFO: stdout: "10.244.1.8:46946"
STEP: Verifying the preserved source ip
Oct 23 03:52:39.397: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Oct 23 03:52:39.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-test2tq2w -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Oct 23 03:52:39.639: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Oct 23 03:52:39.639: INFO: stdout: "10.244.1.8:46582"
STEP: Verifying the preserved source ip
Oct 23 03:52:39.639: INFO: Waiting up to 2m0s to get response from 10.244.1.8:8080
Oct 23 03:52:39.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-test2zdz4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip'
Oct 23 03:52:40.087: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip\n"
Oct 23 03:52:40.087: INFO: stdout: "10.244.3.197:36252"
STEP: Verifying the preserved source ip
Oct 23 03:52:40.087: INFO: Waiting up to 2m0s to get response from 10.244.4.131:8080
Oct 23 03:52:40.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-test2zdz4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.131:8080/clientip'
Oct 23 03:52:40.413: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.131:8080/clientip\n"
Oct 23 03:52:40.413: INFO: stdout: "10.244.3.197:51956"
STEP: Verifying the preserved source ip
Oct 23 03:52:40.413: INFO: Waiting up to 2m0s to get response from 10.244.2.7:8080
Oct 23 03:52:40.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-test2zdz4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip'
Oct 23 03:52:40.690: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip\n"
Oct 23 03:52:40.690: INFO: stdout: "10.244.3.197:45168"
STEP: Verifying the preserved source ip
Oct 23 03:52:40.690: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Oct 23 03:52:40.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-test2zdz4 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Oct 23 03:52:41.120: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Oct 23 03:52:41.120: INFO: stdout: "10.244.3.197:43554"
STEP: Verifying the preserved source ip
Oct 23 03:52:41.120: INFO: Waiting up to 2m0s to get response from 10.244.1.8:8080
Oct 23 03:52:41.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testldpb8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip'
Oct 23 03:52:41.671: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip\n"
Oct 23 03:52:41.671: INFO: stdout: "10.244.4.131:44060"
STEP: Verifying the preserved source ip
Oct 23 03:52:41.671: INFO: Waiting up to 2m0s to get response from 10.244.3.197:8080
Oct 23 03:52:41.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testldpb8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.197:8080/clientip'
Oct 23 03:52:41.915: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.197:8080/clientip\n"
Oct 23 03:52:41.915: INFO: stdout: "10.244.4.131:59582"
STEP: Verifying the preserved source ip
Oct 23 03:52:41.915: INFO: Waiting up to 2m0s to get response from 10.244.2.7:8080
Oct 23 03:52:41.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testldpb8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip'
Oct 23 03:52:42.148: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip\n"
Oct 23 03:52:42.148: INFO: stdout: "10.244.4.131:40228"
STEP: Verifying the preserved source ip
Oct 23 03:52:42.148: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Oct 23 03:52:42.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testldpb8 -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Oct 23 03:52:42.398: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Oct 23 03:52:42.398: INFO: stdout: "10.244.4.131:37676"
STEP: Verifying the preserved source ip
Oct 23 03:52:42.398: INFO: Waiting up to 2m0s to get response from 10.244.1.8:8080
Oct 23 03:52:42.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testqm4hx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip'
Oct 23 03:52:42.644: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip\n"
Oct 23 03:52:42.644: INFO: stdout: "10.244.2.7:59798"
STEP: Verifying the preserved source ip
Oct 23 03:52:42.644: INFO: Waiting up to 2m0s to get response from 10.244.3.197:8080
Oct 23 03:52:42.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testqm4hx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.197:8080/clientip'
Oct 23 03:52:42.877: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.197:8080/clientip\n"
Oct 23 03:52:42.877: INFO: stdout: "10.244.2.7:46216"
STEP: Verifying the preserved source ip
Oct 23 03:52:42.877: INFO: Waiting up to 2m0s to get response from 10.244.4.131:8080
Oct 23 03:52:42.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testqm4hx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.131:8080/clientip'
Oct 23 03:52:43.130: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.131:8080/clientip\n"
Oct 23 03:52:43.130: INFO: stdout: "10.244.2.7:59884"
STEP: Verifying the preserved source ip
Oct 23 03:52:43.130: INFO: Waiting up to 2m0s to get response from 10.244.0.10:8080
Oct 23 03:52:43.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testqm4hx -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip'
Oct 23 03:52:43.360: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.0.10:8080/clientip\n"
Oct 23 03:52:43.360: INFO: stdout: "10.244.2.7:35272"
STEP: Verifying the preserved source ip
Oct 23 03:52:43.360: INFO: Waiting up to 2m0s to get response from 10.244.1.8:8080
Oct 23 03:52:43.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testtzpck -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip'
Oct 23 03:52:43.609: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.1.8:8080/clientip\n"
Oct 23 03:52:43.609: INFO: stdout: "10.244.0.10:38918"
STEP: Verifying the preserved source ip
Oct 23 03:52:43.609: INFO: Waiting up to 2m0s to get response from 10.244.3.197:8080
Oct 23 03:52:43.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testtzpck -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.3.197:8080/clientip'
Oct 23 03:52:43.848: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.3.197:8080/clientip\n"
Oct 23 03:52:43.848: INFO: stdout: "10.244.0.10:59100"
STEP: Verifying the preserved source ip
Oct 23 03:52:43.848: INFO: Waiting up to 2m0s to get response from 10.244.4.131:8080
Oct 23 03:52:43.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testtzpck -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.4.131:8080/clientip'
Oct 23 03:52:44.084: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.4.131:8080/clientip\n"
Oct 23 03:52:44.084: INFO: stdout: "10.244.0.10:58690"
STEP: Verifying the preserved source ip
Oct 23 03:52:44.084: INFO: Waiting up to 2m0s to get response from 10.244.2.7:8080
Oct 23 03:52:44.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=no-snat-test-9006 exec no-snat-testtzpck -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip'
Oct 23 03:52:44.326: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.244.2.7:8080/clientip\n"
Oct 23 03:52:44.326: INFO: stdout: "10.244.0.10:50672"
STEP: Verifying the preserved source ip
[AfterEach] [sig-network] NoSNAT [Feature:NoSNAT] [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:44.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "no-snat-test-9006" for this suite.


• [SLOW TEST:25.787 seconds]
[sig-network] NoSNAT [Feature:NoSNAT] [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Should be able to send traffic between Pods without SNAT
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/no_snat.go:64
------------------------------
{"msg":"PASSED [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT","total":-1,"completed":3,"skipped":538,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:33.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
Oct 23 03:52:33.382: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:35.386: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:37.386: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Oct 23 03:52:37.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8685 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Oct 23 03:52:37.945: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Oct 23 03:52:37.945: INFO: stdout: "iptables"
Oct 23 03:52:37.945: INFO: proxyMode: iptables
Oct 23 03:52:37.953: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 23 03:52:37.955: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-8685
Oct 23 03:52:37.961: INFO: sourceip-test cluster ip: 10.233.44.73
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Oct 23 03:52:37.977: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:39.982: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:41.981: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:43.981: INFO: The status of Pod echo-sourceip is Running (Ready = true)
STEP: waiting up to 3m0s for service sourceip-test in namespace services-8685 to expose endpoints map[echo-sourceip:[8080]]
Oct 23 03:52:43.989: INFO: successfully validated that service sourceip-test in namespace services-8685 exposes endpoints map[echo-sourceip:[8080]]
STEP: Creating pause pod deployment
Oct 23 03:52:43.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 23 03:52:46.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557964, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557964, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557964, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557963, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5d67c4bf96\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 03:52:47.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557964, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557964, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557967, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557963, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-5d67c4bf96\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 03:52:50.007: INFO: Waiting up to 2m0s to get response from 10.233.44.73:8080
Oct 23 03:52:50.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8685 exec pause-pod-5d67c4bf96-9nltt -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.44.73:8080/clientip'
Oct 23 03:52:50.449: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.44.73:8080/clientip\n"
Oct 23 03:52:50.450: INFO: stdout: "10.244.3.207:46940"
STEP: Verifying the preserved source ip
Oct 23 03:52:50.450: INFO: Waiting up to 2m0s to get response from 10.233.44.73:8080
Oct 23 03:52:50.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-8685 exec pause-pod-5d67c4bf96-s92pm -- /bin/sh -x -c curl -q -s --connect-timeout 30 10.233.44.73:8080/clientip'
Oct 23 03:52:50.713: INFO: stderr: "+ curl -q -s --connect-timeout 30 10.233.44.73:8080/clientip\n"
Oct 23 03:52:50.713: INFO: stdout: "10.244.4.139:56764"
STEP: Verifying the preserved source ip
Oct 23 03:52:50.713: INFO: Deleting deployment
Oct 23 03:52:50.719: INFO: Cleaning up the echo server pod
Oct 23 03:52:50.724: INFO: Cleaning up the sourceip test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:50.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8685" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:17.401 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:903
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":1,"skipped":121,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:50.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should provide Internet connection for containers [Feature:Networking-IPv4]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:97
STEP: Running container which tries to connect to 8.8.8.8
Oct 23 03:52:50.949: INFO: Waiting up to 5m0s for pod "connectivity-test" in namespace "nettest-9278" to be "Succeeded or Failed"
Oct 23 03:52:50.951: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016209ms
Oct 23 03:52:52.955: INFO: Pod "connectivity-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006128708s
Oct 23 03:52:54.959: INFO: Pod "connectivity-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01014865s
STEP: Saw pod success
Oct 23 03:52:54.959: INFO: Pod "connectivity-test" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:54.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9278" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]","total":-1,"completed":2,"skipped":163,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:55.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 23 03:52:55.130: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:55.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-1795" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work from pods [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:1036

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:29.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198
STEP: Performing setup for networking test in namespace nettest-1345
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:52:29.296: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:29.328: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:31.332: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:33.332: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:35.334: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:37.332: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:39.332: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:41.333: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:43.333: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:45.333: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:47.332: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:49.333: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:51.334: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:52:51.339: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:52:57.379: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:52:57.379: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:57.385: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:52:57.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1345" for this suite.


S [SKIPPING] [28.219 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:198

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:33.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for service endpoints using hostNetwork
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474
STEP: Performing setup for networking test in namespace nettest-2851
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:52:33.618: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:33.654: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:35.658: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:37.657: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:39.657: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:41.659: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:43.656: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:45.658: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:47.658: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:49.657: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:51.659: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:53.657: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:55.658: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:52:55.663: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:01.725: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:01.725: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:01.732: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:01.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2851" for this suite.


S [SKIPPING] [28.236 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for service endpoints using hostNetwork [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:474

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:36.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: http [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416
STEP: Performing setup for networking test in namespace nettest-4532
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:52:36.398: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:36.427: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:38.431: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:40.431: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:42.431: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:44.432: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:46.432: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:48.433: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:50.432: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:52.431: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:54.432: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:56.432: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:58.432: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:52:58.436: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:02.457: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:02.457: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:02.465: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:02.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4532" for this suite.


S [SKIPPING] [26.212 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: http [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:416

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:02.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should check NodePort out-of-range
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1494
STEP: creating service nodeport-range-test with type NodePort in namespace services-3641
STEP: changing service nodeport-range-test to out-of-range NodePort 21357
STEP: deleting original service nodeport-range-test
STEP: creating service nodeport-range-test with out-of-range NodePort 21357
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:02.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3641" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":3,"skipped":701,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:57.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
STEP: creating a TCP service hairpin-test with type=ClusterIP in namespace services-3529
Oct 23 03:52:57.436: INFO: hairpin-test cluster ip: 10.233.28.52
STEP: creating a client/server pod
Oct 23 03:52:57.456: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:59.490: INFO: The status of Pod hairpin is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:01.462: INFO: The status of Pod hairpin is Running (Ready = true)
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-3529 to expose endpoints map[hairpin:[8080]]
Oct 23 03:53:01.472: INFO: successfully validated that service hairpin-test in namespace services-3529 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Oct 23 03:53:02.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3529 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 hairpin-test 8080'
Oct 23 03:53:02.958: INFO: stderr: "+ nc -v -t -w 2 hairpin-test 8080\n+ echo hostName\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Oct 23 03:53:02.958: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Oct 23 03:53:02.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3529 exec hairpin -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.28.52 8080'
Oct 23 03:53:03.772: INFO: stderr: "+ nc -v -t -w 2 10.233.28.52 8080\nConnection to 10.233.28.52 8080 port [tcp/http-alt] succeeded!\n+ echo hostName\n"
Oct 23 03:53:03.772: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:03.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3529" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:6.374 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":1,"skipped":97,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:44.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351
STEP: Performing setup for networking test in namespace nettest-2146
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:52:44.481: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:44.511: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:46.515: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:48.515: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:50.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:52.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:54.517: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:56.518: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:58.516: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:00.517: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:02.514: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:04.515: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:06.516: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:53:06.523: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:12.546: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:12.546: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:12.554: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:12.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2146" for this suite.


S [SKIPPING] [28.196 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:351

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:12.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_legacy.go:2196
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 23 03:53:12.820: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Oct 23 03:53:12.823: INFO: starting watch
STEP: patching
STEP: updating
Oct 23 03:53:12.830: INFO: waiting for watch events with expected annotations
Oct 23 03:53:12.830: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Oct 23 03:53:12.830: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:12.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-5909" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":655,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:37.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename network-perf
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
Oct 23 03:52:37.740: INFO: deploying iperf2 server
Oct 23 03:52:37.744: INFO: Waiting for deployment "iperf2-server-deployment" to complete
Oct 23 03:52:37.746: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 23 03:52:39.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557957, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557957, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557957, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557957, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 03:52:41.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557957, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557957, loc:(*time.Location)(0x9e12f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557957, loc:(*time.Location)(0x9e12f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63770557957, loc:(*time.Location)(0x9e12f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"iperf2-server-deployment-59979d877\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 23 03:52:43.761: INFO: waiting for iperf2 server endpoints
Oct 23 03:52:45.765: INFO: found iperf2 server endpoints
Oct 23 03:52:45.765: INFO: waiting for client pods to be running
Oct 23 03:52:47.768: INFO: all client pods are ready: 2 pods
Oct 23 03:52:47.771: INFO: server pod phase Running
Oct 23 03:52:47.771: INFO: server pod condition 0: {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 03:52:37 +0000 UTC Reason: Message:}
Oct 23 03:52:47.771: INFO: server pod condition 1: {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 03:52:43 +0000 UTC Reason: Message:}
Oct 23 03:52:47.771: INFO: server pod condition 2: {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 03:52:43 +0000 UTC Reason: Message:}
Oct 23 03:52:47.771: INFO: server pod condition 3: {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-10-23 03:52:37 +0000 UTC Reason: Message:}
Oct 23 03:52:47.771: INFO: server pod container status 0: {Name:iperf2-server State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2021-10-23 03:52:42 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:docker://a2ca3e97f1c01819bb41273e9adf4c155b372ef5954527f12f98c8af9e219a3f Started:0xc00478cbac}
Oct 23 03:52:47.771: INFO: found 2 matching client pods
Oct 23 03:52:47.774: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-9969 PodName:iperf2-clients-hjwr6 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:47.774: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:47.884: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Oct 23 03:52:47.884: INFO: iperf version: 
Oct 23 03:52:47.884: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-hjwr6 (node node2)
Oct 23 03:52:47.886: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-9969 PodName:iperf2-clients-hjwr6 ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:47.886: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:03.072: INFO: Exec stderr: ""
Oct 23 03:53:03.072: INFO: output from exec on client pod iperf2-clients-hjwr6 (node node2): 
20211023035249.033,10.244.4.138,51080,10.233.40.78,6789,3,0.0-1.0,119930880,959447040
20211023035250.022,10.244.4.138,51080,10.233.40.78,6789,3,1.0-2.0,117833728,942669824
20211023035251.029,10.244.4.138,51080,10.233.40.78,6789,3,2.0-3.0,117047296,936378368
20211023035252.037,10.244.4.138,51080,10.233.40.78,6789,3,3.0-4.0,117964800,943718400
20211023035253.024,10.244.4.138,51080,10.233.40.78,6789,3,4.0-5.0,114819072,918552576
20211023035254.016,10.244.4.138,51080,10.233.40.78,6789,3,5.0-6.0,101842944,814743552
20211023035255.023,10.244.4.138,51080,10.233.40.78,6789,3,6.0-7.0,117964800,943718400
20211023035256.031,10.244.4.138,51080,10.233.40.78,6789,3,7.0-8.0,116785152,934281216
20211023035257.018,10.244.4.138,51080,10.233.40.78,6789,3,8.0-9.0,117964800,943718400
20211023035258.026,10.244.4.138,51080,10.233.40.78,6789,3,9.0-10.0,118095872,944766976
20211023035258.026,10.244.4.138,51080,10.233.40.78,6789,3,0.0-10.0,1160249344,927107527

Oct 23 03:53:03.074: INFO: ExecWithOptions {Command:[/bin/sh -c iperf -v || true] Namespace:network-perf-9969 PodName:iperf2-clients-n8tll ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:03.074: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:03.259: INFO: Exec stderr: "iperf version 2.0.13 (21 Jan 2019) pthreads"
Oct 23 03:53:03.259: INFO: iperf version: 
Oct 23 03:53:03.259: INFO: attempting to run command 'iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5' in client pod iperf2-clients-n8tll (node node1)
Oct 23 03:53:03.261: INFO: ExecWithOptions {Command:[/bin/sh -c iperf  -e -p 6789 --reportstyle C -i 1 -c iperf2-server && sleep 5] Namespace:network-perf-9969 PodName:iperf2-clients-n8tll ContainerName:iperf2-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:03.261: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:18.726: INFO: Exec stderr: ""
Oct 23 03:53:18.726: INFO: output from exec on client pod iperf2-clients-n8tll (node node1): 
20211023035304.704,10.244.3.206,34870,10.233.40.78,6789,3,0.0-1.0,1889009664,15112077312
20211023035305.715,10.244.3.206,34870,10.233.40.78,6789,3,1.0-2.0,1905524736,15244197888
20211023035306.693,10.244.3.206,34870,10.233.40.78,6789,3,2.0-3.0,1909587968,15276703744
20211023035307.699,10.244.3.206,34870,10.233.40.78,6789,3,3.0-4.0,2000420864,16003366912
20211023035308.693,10.244.3.206,34870,10.233.40.78,6789,3,4.0-5.0,1990721536,15925772288
20211023035309.704,10.244.3.206,34870,10.233.40.78,6789,3,5.0-6.0,1895301120,15162408960
20211023035310.716,10.244.3.206,34870,10.233.40.78,6789,3,6.0-7.0,1932263424,15458107392
20211023035311.711,10.244.3.206,34870,10.233.40.78,6789,3,7.0-8.0,1816788992,14534311936
20211023035312.700,10.244.3.206,34870,10.233.40.78,6789,3,8.0-9.0,1896742912,15173943296
20211023035313.693,10.244.3.206,34870,10.233.40.78,6789,3,9.0-10.0,1876164608,15009316864
20211023035313.693,10.244.3.206,34870,10.233.40.78,6789,3,0.0-10.0,19112525824,15289964086

Oct 23 03:53:18.726: INFO:                                From                                 To    Bandwidth (MB/s)
Oct 23 03:53:18.726: INFO:                               node2                              node1                 111
Oct 23 03:53:18.726: INFO:                               node1                              node1                1823
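
The per-interval lines above are iperf 2 CSV records (--reportstyle C): the second-to-last field is bytes transferred in the interval and the last field is throughput in bits/sec. A quick sanity check of the summary table, assuming the MB/s figures come from the final 0.0-10.0 row with 1 MB taken as 1024*1024 bytes (a sketch, not necessarily the framework's exact computation):

  # node2 -> server: 927107527 bit/s  / 8 / 1048576 ~= 111 MB/s
  # node1 -> server: 15289964086 bit/s / 8 / 1048576 ~= 1823 MB/s
  $ echo '20211023035258.026,10.244.4.138,51080,10.233.40.78,6789,3,0.0-10.0,1160249344,927107527' \
      | awk -F, '{printf "%.0f MB/s\n", $NF / 8 / 1048576}'
  111 MB/s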
[AfterEach] [sig-network] Networking IPerf2 [Feature:Networking-Performance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:18.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "network-perf-9969" for this suite.


• [SLOW TEST:41.023 seconds]
[sig-network] Networking IPerf2 [Feature:Networking-Performance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should run iperf2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking_perf.go:188
------------------------------
{"msg":"PASSED [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2","total":-1,"completed":3,"skipped":572,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:18.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename firewall-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:61
Oct 23 03:53:18.897: INFO: Only supported for providers [gce] (not local)
[AfterEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:18.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "firewall-test-7769" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-network] Firewall rule
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  control plane should not expose well-known ports [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:214

  Only supported for providers [gce] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
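The skip is expected on this cluster: the firewall assertions require GCE-managed firewall rules, and the BeforeEach bails out for any other provider (here "local"). Assuming a GCE cluster and the standard e2e.test provider flags (plus the usual GCE project and zone settings), the spec could be exercised roughly like this:

  $ ./e2e.test --provider=gce --kubeconfig=$HOME/.kube/config \
      --ginkgo.focus='Firewall rule.*should not expose well-known ports'
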
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1023 03:51:49.245929      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:49.246: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:49.247: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
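The three lines above are the framework probing whether PodSecurityPolicy admission is actually enforced: it attempts a server-side dry-run pod creation, and because the "cmk.intel.com" admission webhook rejects dry-run requests, it falls back to assuming PSP is not enforced. A rough manual analogue of that probe (the pod name and image here are arbitrary placeholders, not what the framework uses):

  $ kubectl --kubeconfig=/root/.kube/config --namespace=services-172 \
      run psp-dryrun-probe --image=k8s.gcr.io/pause:3.4.1 --dry-run=server -o name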
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
STEP: creating service-disabled in namespace services-172
STEP: creating service service-proxy-disabled in namespace services-172
STEP: creating replication controller service-proxy-disabled in namespace services-172
I1023 03:51:49.260959      28 runners.go:190] Created replication controller with name: service-proxy-disabled, namespace: services-172, replica count: 3
I1023 03:51:52.312414      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:51:55.314399      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:51:58.316045      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:01.318455      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:04.319127      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:07.320177      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:10.322639      28 runners.go:190] service-proxy-disabled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating service in namespace services-172
STEP: creating service service-proxy-toggled in namespace services-172
STEP: creating replication controller service-proxy-toggled in namespace services-172
I1023 03:52:10.335398      28 runners.go:190] Created replication controller with name: service-proxy-toggled, namespace: services-172, replica count: 3
I1023 03:52:13.386998      28 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:16.387718      28 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:19.388235      28 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:22.389235      28 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:25.390218      28 runners.go:190] service-proxy-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Oct 23 03:52:25.392: INFO: Creating new host exec pod
Oct 23 03:52:25.409: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:27.413: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:29.412: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:31.413: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:33.412: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:52:33.412: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:52:37.426: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done" in pod services-172/verify-service-up-host-exec-pod
Oct 23 03:52:37.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done'
Oct 23 03:52:38.242: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n"
Oct 23 03:52:38.242: INFO: stdout: "service-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-pr
oxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\n"
Oct 23 03:52:38.243: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done" in pod services-172/verify-service-up-exec-pod-x5kg2
Oct 23 03:52:38.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-up-exec-pod-x5kg2 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done'
Oct 23 03:52:38.696: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n"
Oct 23 03:52:38.696: INFO: stdout: "service-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-pr
oxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\n"
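The two exec dumps above are the substance of "verifying service has 3 reachable backends": 150 wget round-trips against the ClusterIP 10.233.14.62, once from a host-network pod and once from a regular pod, with the expectation that all three backend pod names show up in the responses. The same spread can be eyeballed by piping the loop through sort | uniq -c (assuming the namespace and pods still exist; they are removed at teardown):

  $ kubectl --kubeconfig=/root/.kube/config --namespace=services-172 \
      exec verify-service-up-host-exec-pod -- /bin/sh -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done' \
      | sort | uniq -c
  # expect three distinct backends: service-proxy-toggled-lgpfh, -qb57v and -qmdfr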
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-172
STEP: Deleting pod verify-service-up-exec-pod-x5kg2 in namespace services-172
STEP: verifying service-disabled is not up
Oct 23 03:52:38.710: INFO: Creating new host exec pod
Oct 23 03:52:38.722: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:40.726: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:42.725: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 23 03:52:42.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.13.185:80 && echo service-down-failed'
Oct 23 03:52:45.025: INFO: rc: 28
Oct 23 03:52:45.025: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.13.185:80 && echo service-down-failed" in pod services-172/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.13.185:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.13.185:80
command terminated with exit code 28

error:
exit status 28
Output: 
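Exit code 28 is curl's "operation timed out" status, which is the outcome the test wants here: 10.233.13.185 is the ClusterIP of service-proxy-disabled, presumably created with the service-proxy-name label already set (hence "service-disabled"), so kube-proxy should never program it and connections should time out. The check can be repeated by hand with the same command the framework runs; the trailing marker only prints if the connection unexpectedly succeeds:

  $ kubectl --kubeconfig=/root/.kube/config --namespace=services-172 \
      exec verify-service-down-host-exec-pod -- /bin/sh -c \
      'curl -g -s --connect-timeout 2 http://10.233.13.185:80 && echo service-down-failed'
  $ echo $?   # 28 = curl timeout, i.e. the service is (correctly) unreachable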
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-172
STEP: adding service-proxy-name label
STEP: verifying service is not up
Oct 23 03:52:45.043: INFO: Creating new host exec pod
Oct 23 03:52:45.054: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:47.058: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:49.057: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:51.060: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:53.057: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:55.059: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:57.059: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:59.059: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:01.060: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:03.060: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 23 03:53:03.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.14.62:80 && echo service-down-failed'
Oct 23 03:53:05.765: INFO: rc: 28
Oct 23 03:53:05.765: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.14.62:80 && echo service-down-failed" in pod services-172/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.14.62:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.14.62:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-172
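The mechanism being toggled on service-proxy-toggled is the well-known service.kubernetes.io/service-proxy-name label: while it is set, kube-proxy ignores the service (hence the timeout just verified), and once it is removed the ClusterIP becomes reachable again. The next log line says "annotation", but what gets removed is the same label that the earlier "adding service-proxy-name label" step attached. A sketch of the equivalent manual toggle; the label value shown is an arbitrary placeholder, since any non-empty value hands the service off to an alternate proxy:

  $ kubectl --namespace=services-172 label service service-proxy-toggled \
      service.kubernetes.io/service-proxy-name=my-other-proxy    # kube-proxy stops serving it
  $ kubectl --namespace=services-172 label service service-proxy-toggled \
      service.kubernetes.io/service-proxy-name-                  # trailing '-' removes the label again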
STEP: removing service-proxy-name annotation
STEP: verifying service is up
Oct 23 03:53:05.786: INFO: Creating new host exec pod
Oct 23 03:53:05.799: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:07.802: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:53:07.803: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:53:11.820: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done" in pod services-172/verify-service-up-host-exec-pod
Oct 23 03:53:11.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done'
Oct 23 03:53:12.851: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n"
Oct 23 03:53:12.852: INFO: stdout: "service-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-pr
oxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\n"
Oct 23 03:53:12.852: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done" in pod services-172/verify-service-up-exec-pod-kmcmr
Oct 23 03:53:12.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-up-exec-pod-kmcmr -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80 2>&1 || true; echo; done'
Oct 23 03:53:13.552: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.14.62:80\n+ echo\n"
Oct 23 03:53:13.553: INFO: stdout: "service-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-pr
oxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-lgpfh\nservice-proxy-toggled-qb57v\nservice-proxy-toggled-qmdfr\nservice-proxy-toggled-qb57v\n"
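The stderr trace above is the shell's -x output for a loop of 150 short-timeout wget requests against the service ClusterIP, and the stdout lists which backend pod answered each request. A minimal sketch of that check, reusing the namespace, exec pod, and ClusterIP shown in this log (the framework's exact wrapper may differ):

  kubectl --kubeconfig=/root/.kube/config --namespace=services-172 \
    exec verify-service-up-host-exec-pod -- /bin/sh -x -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.14.62:80; echo; done' \
    | sort | uniq -c

The pass condition is that every endpoint of the service-proxy-toggled set (the qb57v, lgpfh, and qmdfr pods above) shows up in the output, i.e. the proxy is forwarding and spreading requests across all backends.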
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-172
STEP: Deleting pod verify-service-up-exec-pod-kmcmr in namespace services-172
STEP: verifying service-disabled is still not up
Oct 23 03:53:13.570: INFO: Creating new host exec pod
Oct 23 03:53:13.581: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:15.585: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:17.585: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 23 03:53:17.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.13.185:80 && echo service-down-failed'
Oct 23 03:53:20.092: INFO: rc: 28
Oct 23 03:53:20.092: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.13.185:80 && echo service-down-failed" in pod services-172/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-172 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.13.185:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.13.185:80
command terminated with exit code 28

error:
exit status 28
Output: 
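The rc 28 above is curl's "operation timed out" exit code, and here the timeout is the desired outcome: the service-disabled ClusterIP must stay unreachable while the proxy serves only the toggled service. A sketch of the same negative check (command as in the log; the trailing echo only fires if the connection unexpectedly succeeds):

  kubectl --kubeconfig=/root/.kube/config --namespace=services-172 \
    exec verify-service-down-host-exec-pod -- /bin/sh -x -c \
    'curl -g -s --connect-timeout 2 http://10.233.13.185:80 && echo service-down-failed'

Empty stdout plus exit status 28 is a pass; seeing "service-down-failed" would mean the proxy is still forwarding to the disabled service.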
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-172
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:20.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-172" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:90.896 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":1,"skipped":153,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
Oct 23 03:51:49.631: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:51.635: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:53.635: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:55.636: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:57.634: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:51:59.635: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:01.636: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:03.636: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:05.636: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:07.634: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:09.636: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:11.635: INFO: The status of Pod boom-server is Running (Ready = true)
STEP: Server pod created on node node2
STEP: Server service created
Oct 23 03:52:11.655: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:13.660: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:15.660: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:17.659: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:19.659: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:21.660: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:23.660: INFO: The status of Pod startup-script is Running (Ready = true)
STEP: Client pod created
STEP: checking client pod does not RST the TCP connection because it receives an INVALID packet
Oct 23 03:53:23.816: INFO: boom-server pod logs: 2021/10/23 03:52:06 external ip: 10.244.4.121
2021/10/23 03:52:06 listen on 0.0.0.0:9000
2021/10/23 03:52:06 probing 10.244.4.121
2021/10/23 03:52:23 tcp packet: &{SrcPort:38258 DestPort:9000 Seq:3482240015 Ack:0 Flags:40962 WindowSize:29200 Checksum:43114 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:23 tcp packet: &{SrcPort:38258 DestPort:9000 Seq:3482240016 Ack:1834880190 Flags:32784 WindowSize:229 Checksum:2882 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:23 connection established
2021/10/23 03:52:23 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 149 114 109 92 134 30 207 142 196 16 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:23 checksumer: &{sum:455835 oddByte:33 length:39}
2021/10/23 03:52:23 ret:  455868
2021/10/23 03:52:23 ret:  62658
2021/10/23 03:52:23 ret:  62658
2021/10/23 03:52:23 boom packet injected
2021/10/23 03:52:23 tcp packet: &{SrcPort:38258 DestPort:9000 Seq:3482240016 Ack:1834880190 Flags:32785 WindowSize:229 Checksum:2881 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:25 tcp packet: &{SrcPort:44933 DestPort:9000 Seq:2150246586 Ack:0 Flags:40962 WindowSize:29200 Checksum:28993 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:25 tcp packet: &{SrcPort:44933 DestPort:9000 Seq:2150246587 Ack:1558996730 Flags:32784 WindowSize:229 Checksum:33406 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:25 connection established
2021/10/23 03:52:25 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 175 133 92 234 224 90 128 42 40 187 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:25 checksumer: &{sum:530451 oddByte:33 length:39}
2021/10/23 03:52:25 ret:  530484
2021/10/23 03:52:25 ret:  6204
2021/10/23 03:52:25 ret:  6204
2021/10/23 03:52:25 boom packet injected
2021/10/23 03:52:25 tcp packet: &{SrcPort:44933 DestPort:9000 Seq:2150246587 Ack:1558996730 Flags:32785 WindowSize:229 Checksum:33405 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:27 tcp packet: &{SrcPort:38967 DestPort:9000 Seq:1621443236 Ack:0 Flags:40962 WindowSize:29200 Checksum:34392 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:27 tcp packet: &{SrcPort:38967 DestPort:9000 Seq:1621443237 Ack:73328994 Flags:32784 WindowSize:229 Checksum:26090 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:27 connection established
2021/10/23 03:52:27 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 152 55 4 93 98 194 96 165 66 165 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:27 checksumer: &{sum:526624 oddByte:33 length:39}
2021/10/23 03:52:27 ret:  526657
2021/10/23 03:52:27 ret:  2377
2021/10/23 03:52:27 ret:  2377
2021/10/23 03:52:27 boom packet injected
2021/10/23 03:52:27 tcp packet: &{SrcPort:38967 DestPort:9000 Seq:1621443237 Ack:73328994 Flags:32785 WindowSize:229 Checksum:26089 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:29 tcp packet: &{SrcPort:40596 DestPort:9000 Seq:665502330 Ack:0 Flags:40962 WindowSize:29200 Checksum:12624 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:29 tcp packet: &{SrcPort:40596 DestPort:9000 Seq:665502331 Ack:1200834290 Flags:32784 WindowSize:229 Checksum:26699 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:29 connection established
2021/10/23 03:52:29 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 158 148 71 145 192 82 39 170 194 123 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:29 checksumer: &{sum:525838 oddByte:33 length:39}
2021/10/23 03:52:29 ret:  525871
2021/10/23 03:52:29 ret:  1591
2021/10/23 03:52:29 ret:  1591
2021/10/23 03:52:29 boom packet injected
2021/10/23 03:52:29 tcp packet: &{SrcPort:40596 DestPort:9000 Seq:665502331 Ack:1200834290 Flags:32785 WindowSize:229 Checksum:26698 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:31 tcp packet: &{SrcPort:43540 DestPort:9000 Seq:4104294634 Ack:0 Flags:40962 WindowSize:29200 Checksum:33430 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:31 tcp packet: &{SrcPort:43540 DestPort:9000 Seq:4104294635 Ack:1970270704 Flags:32784 WindowSize:229 Checksum:55527 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:31 connection established
2021/10/23 03:52:31 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 170 20 117 110 107 80 244 162 144 235 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:31 checksumer: &{sum:510350 oddByte:33 length:39}
2021/10/23 03:52:31 ret:  510383
2021/10/23 03:52:31 ret:  51638
2021/10/23 03:52:31 ret:  51638
2021/10/23 03:52:31 boom packet injected
2021/10/23 03:52:31 tcp packet: &{SrcPort:43540 DestPort:9000 Seq:4104294635 Ack:1970270704 Flags:32785 WindowSize:229 Checksum:55526 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:33 tcp packet: &{SrcPort:38258 DestPort:9000 Seq:3482240017 Ack:1834880191 Flags:32784 WindowSize:229 Checksum:48414 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:33 tcp packet: &{SrcPort:43112 DestPort:9000 Seq:2290576965 Ack:0 Flags:40962 WindowSize:29200 Checksum:2866 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:33 tcp packet: &{SrcPort:43112 DestPort:9000 Seq:2290576966 Ack:1913806463 Flags:32784 WindowSize:229 Checksum:61567 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:33 connection established
2021/10/23 03:52:33 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 168 104 114 16 215 223 136 135 110 70 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:33 checksumer: &{sum:495207 oddByte:33 length:39}
2021/10/23 03:52:33 ret:  495240
2021/10/23 03:52:33 ret:  36495
2021/10/23 03:52:33 ret:  36495
2021/10/23 03:52:33 boom packet injected
2021/10/23 03:52:33 tcp packet: &{SrcPort:43112 DestPort:9000 Seq:2290576966 Ack:1913806463 Flags:32785 WindowSize:229 Checksum:61566 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:35 tcp packet: &{SrcPort:44933 DestPort:9000 Seq:2150246588 Ack:1558996731 Flags:32784 WindowSize:229 Checksum:13403 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:35 tcp packet: &{SrcPort:39694 DestPort:9000 Seq:4182428891 Ack:0 Flags:40962 WindowSize:29200 Checksum:16738 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:35 tcp packet: &{SrcPort:39694 DestPort:9000 Seq:4182428892 Ack:1886461913 Flags:32784 WindowSize:229 Checksum:24359 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:35 connection established
2021/10/23 03:52:35 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 155 14 112 111 153 57 249 74 204 220 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:35 checksumer: &{sum:476905 oddByte:33 length:39}
2021/10/23 03:52:35 ret:  476938
2021/10/23 03:52:35 ret:  18193
2021/10/23 03:52:35 ret:  18193
2021/10/23 03:52:35 boom packet injected
2021/10/23 03:52:35 tcp packet: &{SrcPort:39694 DestPort:9000 Seq:4182428892 Ack:1886461913 Flags:32785 WindowSize:229 Checksum:24358 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:37 tcp packet: &{SrcPort:38967 DestPort:9000 Seq:1621443238 Ack:73328995 Flags:32784 WindowSize:229 Checksum:6087 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:37 tcp packet: &{SrcPort:38085 DestPort:9000 Seq:652008221 Ack:0 Flags:40962 WindowSize:29200 Checksum:1031 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:37 tcp packet: &{SrcPort:38085 DestPort:9000 Seq:652008222 Ack:3973823673 Flags:32784 WindowSize:229 Checksum:176 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:37 connection established
2021/10/23 03:52:37 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 148 197 236 218 54 25 38 220 219 30 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:37 checksumer: &{sum:531511 oddByte:33 length:39}
2021/10/23 03:52:37 ret:  531544
2021/10/23 03:52:37 ret:  7264
2021/10/23 03:52:37 ret:  7264
2021/10/23 03:52:37 boom packet injected
2021/10/23 03:52:37 tcp packet: &{SrcPort:38085 DestPort:9000 Seq:652008222 Ack:3973823673 Flags:32785 WindowSize:229 Checksum:175 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:39 tcp packet: &{SrcPort:40596 DestPort:9000 Seq:665502332 Ack:1200834291 Flags:32784 WindowSize:229 Checksum:6696 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:39 tcp packet: &{SrcPort:33914 DestPort:9000 Seq:3933624226 Ack:0 Flags:40962 WindowSize:29200 Checksum:52321 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:39 tcp packet: &{SrcPort:33914 DestPort:9000 Seq:3933624227 Ack:1273973526 Flags:32784 WindowSize:229 Checksum:54218 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:39 connection established
2021/10/23 03:52:39 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 132 122 75 237 196 118 234 118 87 163 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:39 checksumer: &{sum:548948 oddByte:33 length:39}
2021/10/23 03:52:39 ret:  548981
2021/10/23 03:52:39 ret:  24701
2021/10/23 03:52:39 ret:  24701
2021/10/23 03:52:39 boom packet injected
2021/10/23 03:52:39 tcp packet: &{SrcPort:33914 DestPort:9000 Seq:3933624227 Ack:1273973526 Flags:32785 WindowSize:229 Checksum:54217 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:41 tcp packet: &{SrcPort:43540 DestPort:9000 Seq:4104294636 Ack:1970270705 Flags:32784 WindowSize:229 Checksum:35523 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:41 tcp packet: &{SrcPort:34640 DestPort:9000 Seq:1056067617 Ack:0 Flags:40962 WindowSize:29200 Checksum:29889 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:41 tcp packet: &{SrcPort:34640 DestPort:9000 Seq:1056067618 Ack:12586050 Flags:32784 WindowSize:229 Checksum:65117 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:41 connection established
2021/10/23 03:52:41 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 135 80 0 190 133 162 62 242 80 34 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:41 checksumer: &{sum:535834 oddByte:33 length:39}
2021/10/23 03:52:41 ret:  535867
2021/10/23 03:52:41 ret:  11587
2021/10/23 03:52:41 ret:  11587
2021/10/23 03:52:41 boom packet injected
2021/10/23 03:52:41 tcp packet: &{SrcPort:34640 DestPort:9000 Seq:1056067618 Ack:12586050 Flags:32785 WindowSize:229 Checksum:65116 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:43 tcp packet: &{SrcPort:43112 DestPort:9000 Seq:2290576967 Ack:1913806464 Flags:32784 WindowSize:229 Checksum:41564 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:43 tcp packet: &{SrcPort:43842 DestPort:9000 Seq:4192469369 Ack:0 Flags:40962 WindowSize:29200 Checksum:56499 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:43 tcp packet: &{SrcPort:43842 DestPort:9000 Seq:4192469370 Ack:2913814512 Flags:32784 WindowSize:229 Checksum:30179 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:43 connection established
2021/10/23 03:52:43 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 171 66 173 171 193 80 249 228 1 122 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:43 checksumer: &{sum:525715 oddByte:33 length:39}
2021/10/23 03:52:43 ret:  525748
2021/10/23 03:52:43 ret:  1468
2021/10/23 03:52:43 ret:  1468
2021/10/23 03:52:43 boom packet injected
2021/10/23 03:52:43 tcp packet: &{SrcPort:43842 DestPort:9000 Seq:4192469370 Ack:2913814512 Flags:32785 WindowSize:229 Checksum:30178 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:45 tcp packet: &{SrcPort:39694 DestPort:9000 Seq:4182428893 Ack:1886461914 Flags:32784 WindowSize:229 Checksum:4356 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:45 tcp packet: &{SrcPort:35878 DestPort:9000 Seq:2298782103 Ack:0 Flags:40962 WindowSize:29200 Checksum:50369 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:45 tcp packet: &{SrcPort:35878 DestPort:9000 Seq:2298782104 Ack:638709881 Flags:32784 WindowSize:229 Checksum:13618 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:45 connection established
2021/10/23 03:52:45 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 140 38 38 16 105 217 137 4 161 152 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:45 checksumer: &{sum:464069 oddByte:33 length:39}
2021/10/23 03:52:45 ret:  464102
2021/10/23 03:52:45 ret:  5357
2021/10/23 03:52:45 ret:  5357
2021/10/23 03:52:45 boom packet injected
2021/10/23 03:52:45 tcp packet: &{SrcPort:35878 DestPort:9000 Seq:2298782104 Ack:638709881 Flags:32785 WindowSize:229 Checksum:13617 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:47 tcp packet: &{SrcPort:38085 DestPort:9000 Seq:652008223 Ack:3973823674 Flags:32784 WindowSize:229 Checksum:45709 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:47 tcp packet: &{SrcPort:37410 DestPort:9000 Seq:3610352758 Ack:0 Flags:40962 WindowSize:29200 Checksum:28136 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:47 tcp packet: &{SrcPort:37410 DestPort:9000 Seq:3610352759 Ack:4156121256 Flags:32784 WindowSize:229 Checksum:38066 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:47 connection established
2021/10/23 03:52:47 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 146 34 247 183 218 8 215 49 156 119 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:47 checksumer: &{sum:455766 oddByte:33 length:39}
2021/10/23 03:52:47 ret:  455799
2021/10/23 03:52:47 ret:  62589
2021/10/23 03:52:47 ret:  62589
2021/10/23 03:52:47 boom packet injected
2021/10/23 03:52:47 tcp packet: &{SrcPort:37410 DestPort:9000 Seq:3610352759 Ack:4156121256 Flags:32785 WindowSize:229 Checksum:38065 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:49 tcp packet: &{SrcPort:33914 DestPort:9000 Seq:3933624228 Ack:1273973527 Flags:32784 WindowSize:229 Checksum:34214 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:49 tcp packet: &{SrcPort:34601 DestPort:9000 Seq:1151810521 Ack:0 Flags:40962 WindowSize:29200 Checksum:25655 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:49 tcp packet: &{SrcPort:34601 DestPort:9000 Seq:1151810522 Ack:320556056 Flags:32784 WindowSize:229 Checksum:31837 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:49 connection established
2021/10/23 03:52:49 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 135 41 19 25 197 120 68 167 59 218 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:49 checksumer: &{sum:500830 oddByte:33 length:39}
2021/10/23 03:52:49 ret:  500863
2021/10/23 03:52:49 ret:  42118
2021/10/23 03:52:49 ret:  42118
2021/10/23 03:52:49 boom packet injected
2021/10/23 03:52:49 tcp packet: &{SrcPort:34601 DestPort:9000 Seq:1151810522 Ack:320556056 Flags:32785 WindowSize:229 Checksum:31836 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:51 tcp packet: &{SrcPort:34640 DestPort:9000 Seq:1056067619 Ack:12586051 Flags:32784 WindowSize:229 Checksum:45111 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:51 tcp packet: &{SrcPort:40264 DestPort:9000 Seq:1762409822 Ack:0 Flags:40962 WindowSize:29200 Checksum:8278 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:51 tcp packet: &{SrcPort:40264 DestPort:9000 Seq:1762409823 Ack:2230700472 Flags:32784 WindowSize:229 Checksum:16677 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:51 connection established
2021/10/23 03:52:51 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 157 72 132 244 67 24 105 12 61 95 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:51 checksumer: &{sum:469130 oddByte:33 length:39}
2021/10/23 03:52:51 ret:  469163
2021/10/23 03:52:51 ret:  10418
2021/10/23 03:52:51 ret:  10418
2021/10/23 03:52:51 boom packet injected
2021/10/23 03:52:51 tcp packet: &{SrcPort:40264 DestPort:9000 Seq:1762409823 Ack:2230700472 Flags:32785 WindowSize:229 Checksum:16675 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:53 tcp packet: &{SrcPort:43842 DestPort:9000 Seq:4192469371 Ack:2913814513 Flags:32784 WindowSize:229 Checksum:10175 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:53 tcp packet: &{SrcPort:43735 DestPort:9000 Seq:2007121166 Ack:0 Flags:40962 WindowSize:29200 Checksum:64695 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:53 tcp packet: &{SrcPort:43735 DestPort:9000 Seq:2007121167 Ack:2162509864 Flags:32784 WindowSize:229 Checksum:39772 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:53 connection established
2021/10/23 03:52:53 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 170 215 128 227 193 136 119 162 61 15 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:53 checksumer: &{sum:548127 oddByte:33 length:39}
2021/10/23 03:52:53 ret:  548160
2021/10/23 03:52:53 ret:  23880
2021/10/23 03:52:53 ret:  23880
2021/10/23 03:52:53 boom packet injected
2021/10/23 03:52:53 tcp packet: &{SrcPort:43735 DestPort:9000 Seq:2007121167 Ack:2162509864 Flags:32785 WindowSize:229 Checksum:39771 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:55 tcp packet: &{SrcPort:35878 DestPort:9000 Seq:2298782105 Ack:638709882 Flags:32784 WindowSize:229 Checksum:59149 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:55 tcp packet: &{SrcPort:32917 DestPort:9000 Seq:2271382493 Ack:0 Flags:40962 WindowSize:29200 Checksum:49302 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:55 tcp packet: &{SrcPort:32917 DestPort:9000 Seq:2271382494 Ack:4016358773 Flags:32784 WindowSize:229 Checksum:27549 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:55 connection established
2021/10/23 03:52:55 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 128 149 239 99 62 213 135 98 139 222 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:55 checksumer: &{sum:554815 oddByte:33 length:39}
2021/10/23 03:52:55 ret:  554848
2021/10/23 03:52:55 ret:  30568
2021/10/23 03:52:55 ret:  30568
2021/10/23 03:52:55 boom packet injected
2021/10/23 03:52:55 tcp packet: &{SrcPort:32917 DestPort:9000 Seq:2271382494 Ack:4016358773 Flags:32785 WindowSize:229 Checksum:27548 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:57 tcp packet: &{SrcPort:37410 DestPort:9000 Seq:3610352760 Ack:4156121257 Flags:32784 WindowSize:229 Checksum:18055 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:57 tcp packet: &{SrcPort:45449 DestPort:9000 Seq:2406269113 Ack:0 Flags:40962 WindowSize:29200 Checksum:19170 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:57 tcp packet: &{SrcPort:45449 DestPort:9000 Seq:2406269114 Ack:1888998245 Flags:32784 WindowSize:229 Checksum:24296 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:57 connection established
2021/10/23 03:52:57 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 177 137 112 150 76 197 143 108 192 186 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:57 checksumer: &{sum:554044 oddByte:33 length:39}
2021/10/23 03:52:57 ret:  554077
2021/10/23 03:52:57 ret:  29797
2021/10/23 03:52:57 ret:  29797
2021/10/23 03:52:57 boom packet injected
2021/10/23 03:52:57 tcp packet: &{SrcPort:45449 DestPort:9000 Seq:2406269114 Ack:1888998245 Flags:32785 WindowSize:229 Checksum:24295 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:59 tcp packet: &{SrcPort:34601 DestPort:9000 Seq:1151810523 Ack:320556057 Flags:32784 WindowSize:229 Checksum:11832 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:59 tcp packet: &{SrcPort:38172 DestPort:9000 Seq:1403422771 Ack:0 Flags:40962 WindowSize:29200 Checksum:54231 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:52:59 tcp packet: &{SrcPort:38172 DestPort:9000 Seq:1403422772 Ack:1027611210 Flags:32784 WindowSize:229 Checksum:52372 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:52:59 connection established
2021/10/23 03:52:59 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 149 28 61 62 147 170 83 166 136 52 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:52:59 checksumer: &{sum:477120 oddByte:33 length:39}
2021/10/23 03:52:59 ret:  477153
2021/10/23 03:52:59 ret:  18408
2021/10/23 03:52:59 ret:  18408
2021/10/23 03:52:59 boom packet injected
2021/10/23 03:52:59 tcp packet: &{SrcPort:38172 DestPort:9000 Seq:1403422772 Ack:1027611210 Flags:32785 WindowSize:229 Checksum:52371 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:01 tcp packet: &{SrcPort:40223 DestPort:9000 Seq:1626565499 Ack:0 Flags:40962 WindowSize:29200 Checksum:54126 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:01 tcp packet: &{SrcPort:40223 DestPort:9000 Seq:1626565500 Ack:889552426 Flags:32784 WindowSize:229 Checksum:26804 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:01 connection established
2021/10/23 03:53:01 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 157 31 53 3 247 138 96 243 107 124 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:01 checksumer: &{sum:492820 oddByte:33 length:39}
2021/10/23 03:53:01 ret:  492853
2021/10/23 03:53:01 ret:  34108
2021/10/23 03:53:01 ret:  34108
2021/10/23 03:53:01 boom packet injected
2021/10/23 03:53:01 tcp packet: &{SrcPort:40223 DestPort:9000 Seq:1626565500 Ack:889552426 Flags:32785 WindowSize:229 Checksum:26803 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:01 tcp packet: &{SrcPort:40264 DestPort:9000 Seq:1762409824 Ack:2230700473 Flags:32784 WindowSize:229 Checksum:62204 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:03 tcp packet: &{SrcPort:40402 DestPort:9000 Seq:2019506313 Ack:0 Flags:40962 WindowSize:29200 Checksum:58993 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:03 tcp packet: &{SrcPort:40402 DestPort:9000 Seq:2019506314 Ack:2148111561 Flags:32784 WindowSize:229 Checksum:4677 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:03 connection established
2021/10/23 03:53:03 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 157 210 128 8 14 41 120 95 56 138 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:03 checksumer: &{sum:480603 oddByte:33 length:39}
2021/10/23 03:53:03 ret:  480636
2021/10/23 03:53:03 ret:  21891
2021/10/23 03:53:03 ret:  21891
2021/10/23 03:53:03 boom packet injected
2021/10/23 03:53:03 tcp packet: &{SrcPort:40402 DestPort:9000 Seq:2019506314 Ack:2148111561 Flags:32785 WindowSize:229 Checksum:4676 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:03 tcp packet: &{SrcPort:43735 DestPort:9000 Seq:2007121168 Ack:2162509865 Flags:32784 WindowSize:229 Checksum:19763 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:05 tcp packet: &{SrcPort:34631 DestPort:9000 Seq:1410299495 Ack:0 Flags:40962 WindowSize:29200 Checksum:56222 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:05 tcp packet: &{SrcPort:34631 DestPort:9000 Seq:1410299496 Ack:2516813274 Flags:32784 WindowSize:229 Checksum:63637 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:05 connection established
2021/10/23 03:53:05 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 135 71 150 1 255 58 84 15 118 104 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:05 checksumer: &{sum:418662 oddByte:33 length:39}
2021/10/23 03:53:05 ret:  418695
2021/10/23 03:53:05 ret:  25485
2021/10/23 03:53:05 ret:  25485
2021/10/23 03:53:05 boom packet injected
2021/10/23 03:53:05 tcp packet: &{SrcPort:34631 DestPort:9000 Seq:1410299496 Ack:2516813274 Flags:32785 WindowSize:229 Checksum:63636 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:05 tcp packet: &{SrcPort:32917 DestPort:9000 Seq:2271382495 Ack:4016358774 Flags:32784 WindowSize:229 Checksum:7544 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:07 tcp packet: &{SrcPort:36713 DestPort:9000 Seq:494377030 Ack:0 Flags:40962 WindowSize:29200 Checksum:57444 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:07 tcp packet: &{SrcPort:36713 DestPort:9000 Seq:494377031 Ack:2381161436 Flags:32784 WindowSize:229 Checksum:57247 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:07 connection established
2021/10/23 03:53:07 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 143 105 141 236 29 60 29 119 152 71 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:07 checksumer: &{sum:505966 oddByte:33 length:39}
2021/10/23 03:53:07 ret:  505999
2021/10/23 03:53:07 ret:  47254
2021/10/23 03:53:07 ret:  47254
2021/10/23 03:53:07 boom packet injected
2021/10/23 03:53:07 tcp packet: &{SrcPort:36713 DestPort:9000 Seq:494377031 Ack:2381161436 Flags:32785 WindowSize:229 Checksum:57246 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:07 tcp packet: &{SrcPort:45449 DestPort:9000 Seq:2406269115 Ack:1888998246 Flags:32784 WindowSize:229 Checksum:4286 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:09 tcp packet: &{SrcPort:38172 DestPort:9000 Seq:1403422773 Ack:1027611211 Flags:32784 WindowSize:229 Checksum:32369 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:09 tcp packet: &{SrcPort:45465 DestPort:9000 Seq:2456443577 Ack:0 Flags:40962 WindowSize:29200 Checksum:32509 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:09 tcp packet: &{SrcPort:45465 DestPort:9000 Seq:2456443578 Ack:592927722 Flags:32784 WindowSize:229 Checksum:10480 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:09 connection established
2021/10/23 03:53:09 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 177 153 35 85 213 74 146 106 90 186 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:09 checksumer: &{sum:509461 oddByte:33 length:39}
2021/10/23 03:53:09 ret:  509494
2021/10/23 03:53:09 ret:  50749
2021/10/23 03:53:09 ret:  50749
2021/10/23 03:53:09 boom packet injected
2021/10/23 03:53:09 tcp packet: &{SrcPort:45465 DestPort:9000 Seq:2456443578 Ack:592927722 Flags:32785 WindowSize:229 Checksum:10479 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:11 tcp packet: &{SrcPort:40223 DestPort:9000 Seq:1626565501 Ack:889552427 Flags:32784 WindowSize:229 Checksum:6801 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:11 tcp packet: &{SrcPort:41519 DestPort:9000 Seq:670684996 Ack:0 Flags:40962 WindowSize:29200 Checksum:29820 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:11 tcp packet: &{SrcPort:41519 DestPort:9000 Seq:670684997 Ack:1748359311 Flags:32784 WindowSize:229 Checksum:21785 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:11 connection established
2021/10/23 03:53:11 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 162 47 104 52 81 239 39 249 215 69 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:11 checksumer: &{sum:522713 oddByte:33 length:39}
2021/10/23 03:53:11 ret:  522746
2021/10/23 03:53:11 ret:  64001
2021/10/23 03:53:11 ret:  64001
2021/10/23 03:53:11 boom packet injected
2021/10/23 03:53:11 tcp packet: &{SrcPort:41519 DestPort:9000 Seq:670684997 Ack:1748359311 Flags:32785 WindowSize:229 Checksum:21784 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:13 tcp packet: &{SrcPort:40402 DestPort:9000 Seq:2019506315 Ack:2148111562 Flags:32784 WindowSize:229 Checksum:50209 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:13 tcp packet: &{SrcPort:43964 DestPort:9000 Seq:1159719312 Ack:0 Flags:40962 WindowSize:29200 Checksum:13229 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:13 tcp packet: &{SrcPort:43964 DestPort:9000 Seq:1159719313 Ack:1626734375 Flags:32784 WindowSize:229 Checksum:60706 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:13 connection established
2021/10/23 03:53:13 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 171 188 96 244 120 135 69 31 233 145 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:13 checksumer: &{sum:545073 oddByte:33 length:39}
2021/10/23 03:53:13 ret:  545106
2021/10/23 03:53:13 ret:  20826
2021/10/23 03:53:13 ret:  20826
2021/10/23 03:53:13 boom packet injected
2021/10/23 03:53:13 tcp packet: &{SrcPort:43964 DestPort:9000 Seq:1159719313 Ack:1626734375 Flags:32785 WindowSize:229 Checksum:60705 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:15 tcp packet: &{SrcPort:34631 DestPort:9000 Seq:1410299497 Ack:2516813275 Flags:32784 WindowSize:229 Checksum:43634 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:15 tcp packet: &{SrcPort:38639 DestPort:9000 Seq:2428656688 Ack:0 Flags:40962 WindowSize:29200 Checksum:33383 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:15 tcp packet: &{SrcPort:38639 DestPort:9000 Seq:2428656689 Ack:2149089892 Flags:32784 WindowSize:229 Checksum:37292 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:15 connection established
2021/10/23 03:53:15 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 150 239 128 22 251 196 144 194 92 49 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:15 checksumer: &{sum:534141 oddByte:33 length:39}
2021/10/23 03:53:15 ret:  534174
2021/10/23 03:53:15 ret:  9894
2021/10/23 03:53:15 ret:  9894
2021/10/23 03:53:15 boom packet injected
2021/10/23 03:53:15 tcp packet: &{SrcPort:38639 DestPort:9000 Seq:2428656689 Ack:2149089892 Flags:32785 WindowSize:229 Checksum:37291 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:17 tcp packet: &{SrcPort:36713 DestPort:9000 Seq:494377032 Ack:2381161437 Flags:32784 WindowSize:229 Checksum:37243 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:17 tcp packet: &{SrcPort:45257 DestPort:9000 Seq:2208075435 Ack:0 Flags:40962 WindowSize:29200 Checksum:15207 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:17 tcp packet: &{SrcPort:45257 DestPort:9000 Seq:2208075436 Ack:2089639759 Flags:32784 WindowSize:229 Checksum:27002 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:17 connection established
2021/10/23 03:53:17 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 176 201 124 139 216 175 131 156 142 172 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:17 checksumer: &{sum:570773 oddByte:33 length:39}
2021/10/23 03:53:17 ret:  570806
2021/10/23 03:53:17 ret:  46526
2021/10/23 03:53:17 ret:  46526
2021/10/23 03:53:17 boom packet injected
2021/10/23 03:53:17 tcp packet: &{SrcPort:45257 DestPort:9000 Seq:2208075436 Ack:2089639759 Flags:32785 WindowSize:229 Checksum:27001 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:19 tcp packet: &{SrcPort:45465 DestPort:9000 Seq:2456443579 Ack:592927723 Flags:32784 WindowSize:229 Checksum:56012 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:19 tcp packet: &{SrcPort:35414 DestPort:9000 Seq:3460967025 Ack:0 Flags:40962 WindowSize:29200 Checksum:29589 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:19 tcp packet: &{SrcPort:35414 DestPort:9000 Seq:3460967026 Ack:4235327700 Flags:32784 WindowSize:229 Checksum:32878 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:19 connection established
2021/10/23 03:53:19 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 138 86 252 112 114 52 206 74 42 114 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:19 checksumer: &{sum:467056 oddByte:33 length:39}
2021/10/23 03:53:19 ret:  467089
2021/10/23 03:53:19 ret:  8344
2021/10/23 03:53:19 ret:  8344
2021/10/23 03:53:19 boom packet injected
2021/10/23 03:53:19 tcp packet: &{SrcPort:35414 DestPort:9000 Seq:3460967026 Ack:4235327700 Flags:32785 WindowSize:229 Checksum:32877 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:21 tcp packet: &{SrcPort:41519 DestPort:9000 Seq:670684998 Ack:1748359312 Flags:32784 WindowSize:229 Checksum:1782 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:21 tcp packet: &{SrcPort:40279 DestPort:9000 Seq:1137988762 Ack:0 Flags:40962 WindowSize:29200 Checksum:47377 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:21 tcp packet: &{SrcPort:40279 DestPort:9000 Seq:1137988763 Ack:2721788097 Flags:32784 WindowSize:229 Checksum:57444 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:21 connection established
2021/10/23 03:53:21 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 157 87 162 57 170 33 67 212 84 155 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:21 checksumer: &{sum:494080 oddByte:33 length:39}
2021/10/23 03:53:21 ret:  494113
2021/10/23 03:53:21 ret:  35368
2021/10/23 03:53:21 ret:  35368
2021/10/23 03:53:21 boom packet injected
2021/10/23 03:53:21 tcp packet: &{SrcPort:40279 DestPort:9000 Seq:1137988763 Ack:2721788097 Flags:32785 WindowSize:229 Checksum:57443 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:23 tcp packet: &{SrcPort:43964 DestPort:9000 Seq:1159719314 Ack:1626734376 Flags:32784 WindowSize:229 Checksum:40702 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:23 tcp packet: &{SrcPort:42932 DestPort:9000 Seq:2108545957 Ack:0 Flags:40962 WindowSize:29200 Checksum:58877 UrgentPtr:0}, flag: SYN , data: [], addr: 10.244.3.194
2021/10/23 03:53:23 tcp packet: &{SrcPort:42932 DestPort:9000 Seq:2108545958 Ack:2011260959 Flags:32784 WindowSize:229 Checksum:63611 UrgentPtr:0}, flag: ACK , data: [], addr: 10.244.3.194
2021/10/23 03:53:23 connection established
2021/10/23 03:53:23 calling checksumTCP: 10.244.4.121 10.244.3.194 [35 40 167 180 119 223 225 127 125 173 219 166 80 24 0 229 0 0 0 0] [98 111 111 109 33 33 33]
2021/10/23 03:53:23 checksumer: &{sum:577495 oddByte:33 length:39}
2021/10/23 03:53:23 ret:  577528
2021/10/23 03:53:23 ret:  53248
2021/10/23 03:53:23 ret:  53248
2021/10/23 03:53:23 boom packet injected
2021/10/23 03:53:23 tcp packet: &{SrcPort:42932 DestPort:9000 Seq:2108545958 Ack:2011260959 Flags:32785 WindowSize:229 Checksum:63610 UrgentPtr:0}, flag: FIN ACK , data: [], addr: 10.244.3.194

Oct 23 03:53:23.816: INFO: boom-server OK: did not receive any RST packet
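The checksumer lines in the boom-server log read as a running Internet-checksum style sum over the injected payload: the trailing odd byte is added to the accumulated sum, then the carries are folded back into 16 bits, which is why each block prints a large "ret" followed immediately by a small one. Reproducing the fold for the first block above (sum 455835, oddByte 33) as a quick sanity check, assuming that reading of the numbers:

  s=$((455835 + 33))                      # 455868, the first "ret" printed at 03:52:23
  echo $(( (s >> 16) + (s & 0xFFFF) ))    # 62658, the folded value printed twice after it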
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:23.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8261" for this suite.


• [SLOW TEST:94.232 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":2,"skipped":282,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:04.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should support basic nodePort: udp functionality
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387
STEP: Performing setup for networking test in namespace nettest-6903
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:53:04.335: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:04.366: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:06.370: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:08.375: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:10.373: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:12.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:14.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:16.372: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:18.371: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:20.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:22.370: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:24.370: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:53:24.376: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:53:26.381: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:30.418: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:30.419: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:30.426: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:30.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6903" for this suite.


S [SKIPPING] [26.219 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should support basic nodePort: udp functionality [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:387

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
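This skip (and the two identical nettest skips that follow) reports "Requires at least 2 nodes (not -1)"; the -1 suggests the framework's node-address lookup came back empty rather than the cluster genuinely having a single worker. Whatever produced the -1, the quickest sanity check is to count usable nodes directly (sketch):

  kubectl get nodes --no-headers | awk '$2 == "Ready"' | wc -l   # Ready, schedulable nodes
  kubectl get nodes -o wide                                      # then eyeball roles, taints, and addresses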
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:55.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for node-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212
STEP: Performing setup for networking test in namespace nettest-9683
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:52:55.381: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:55.429: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:57.432: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:59.432: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:01.435: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:03.431: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:05.432: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:07.433: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:09.433: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:11.435: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:13.433: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:15.433: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:17.433: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:53:17.438: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:35.478: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:35.478: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:35.485: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:35.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-9683" for this suite.


S [SKIPPING] [40.222 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for node-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:212

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:02.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for client IP based session affinity: udp [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434
STEP: Performing setup for networking test in namespace nettest-6704
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:53:03.037: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:03.066: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:05.069: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:07.071: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:09.070: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:11.071: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:13.070: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:15.072: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:17.070: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:19.071: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:21.074: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:23.072: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:25.071: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:53:25.076: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:39.099: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:39.099: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:39.108: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:39.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-6704" for this suite.


S [SKIPPING] [36.194 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for client IP based session affinity: udp [LinuxOnly] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:434

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:39.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Provider:GCE]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68
Oct 23 03:53:39.547: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:39.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6281" for this suite.


S [SKIPPING] [0.035 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Provider:GCE] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:68

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:69
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:01.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242
STEP: Performing setup for networking test in namespace nettest-1662
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:53:01.912: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:01.943: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:03.946: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:05.949: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:07.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:09.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:11.947: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:13.946: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:15.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:17.947: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:19.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:21.947: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:23.947: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:53:23.951: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:39.971: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:39.971: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:39.978: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1662" for this suite.


S [SKIPPING] [38.190 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:242

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:40.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename esipp
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:858
Oct 23 03:53:40.097: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:40.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "esipp-7506" for this suite.
[AfterEach] [sig-network] ESIPP [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:866


S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-network] ESIPP [Slow]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should work for type=NodePort [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:927

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/loadbalancer.go:860
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:40.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename netpol
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating NetworkPolicy API operations
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/netpol/network_policy_api.go:48
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 23 03:53:40.427: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Oct 23 03:53:40.430: INFO: starting watch
STEP: patching
STEP: updating
Oct 23 03:53:40.437: INFO: waiting for watch events with expected annotations
Oct 23 03:53:40.437: INFO: missing expected annotations, waiting: map[string]string{"patched":"true"}
Oct 23 03:53:40.437: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:40.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-7234" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":434,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:12.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update nodePort: udp [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397
STEP: Performing setup for networking test in namespace nettest-706
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:53:13.111: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:13.144: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:15.148: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:17.148: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:19.148: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:21.148: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:23.149: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:25.149: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:27.148: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:29.148: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:31.149: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:33.149: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:35.150: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:53:35.155: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:43.191: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:43.191: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:43.199: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:43.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-706" for this suite.


S [SKIPPING] [30.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update nodePort: udp [Slow] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:397

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:35.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8327.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8327.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8327.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8327.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8327.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8327.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 23 03:53:47.608: INFO: DNS probes using dns-8327/dns-test-4cd79b6c-f44d-4de3-89f0-8ea1138ebd81 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:47.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8327" for this suite.


• [SLOW TEST:12.107 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":3,"skipped":310,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:19.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kube-proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
Oct 23 03:53:19.106: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:21.111: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:23.109: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:25.109: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:27.109: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:29.112: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:31.110: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:33.109: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:35.112: INFO: The status of Pod e2e-net-exec is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:37.109: INFO: The status of Pod e2e-net-exec is Running (Ready = true)
STEP: Launching a server daemon on node node2 (node ip: 10.10.190.208, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Oct 23 03:53:37.128: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:39.130: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:41.133: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:43.134: INFO: The status of Pod e2e-net-server is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:45.131: INFO: The status of Pod e2e-net-server is Running (Ready = true)
STEP: Launching a client connection on node node1 (node ip: 10.10.190.207, image: k8s.gcr.io/e2e-test-images/agnhost:2.32)
Oct 23 03:53:47.152: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:49.155: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:51.160: INFO: The status of Pod e2e-net-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:53.156: INFO: The status of Pod e2e-net-client is Running (Ready = true)
STEP: Checking conntrack entries for the timeout
Oct 23 03:53:53.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kube-proxy-3301 exec e2e-net-exec -- /bin/sh -x -c conntrack -L -f ipv4 -d 10.10.190.208 | grep -m 1 'CLOSE_WAIT.*dport=11302' '
Oct 23 03:53:53.402: INFO: stderr: "+ grep -m 1 CLOSE_WAIT.*dport=11302\n+ conntrack -L -f ipv4 -d 10.10.190.208\nconntrack v1.4.5 (conntrack-tools): 6 flow entries have been shown.\n"
Oct 23 03:53:53.403: INFO: stdout: "tcp      6 3598 CLOSE_WAIT src=10.244.3.231 dst=10.10.190.208 sport=43700 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=26501 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1\n"
Oct 23 03:53:53.403: INFO: conntrack entry for node 10.10.190.208 and port 11302:  tcp      6 3598 CLOSE_WAIT src=10.244.3.231 dst=10.10.190.208 sport=43700 dport=11302 src=10.10.190.208 dst=10.10.190.207 sport=11302 dport=26501 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1

[AfterEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:53.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kube-proxy-3301" for this suite.


• [SLOW TEST:34.344 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":4,"skipped":733,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:20.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451
STEP: Performing setup for networking test in namespace nettest-5007
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:53:20.302: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:20.334: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:22.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:24.338: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:26.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:28.341: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:30.339: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:32.338: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:34.338: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:36.338: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:38.339: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:40.339: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:42.337: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:44.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:46.340: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:48.341: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:50.337: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:52.337: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:53:52.344: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:53:58.368: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:53:58.369: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:58.375: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:53:58.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5007" for this suite.


S [SKIPPING] [38.213 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: http [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:451

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:52:06.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should update endpoints: http
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
STEP: Performing setup for networking test in namespace nettest-3878
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:52:06.921: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:52:06.954: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:08.958: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:10.958: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:12.959: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:14.958: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:16.957: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:18.957: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:20.963: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:22.960: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:24.960: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:26.958: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:52:28.957: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:52:28.962: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:52:32.984: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:52:32.984: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
Oct 23 03:52:33.016: INFO: Service node-port-service in namespace nettest-3878 found.
Oct 23 03:52:33.030: INFO: Service session-affinity-service in namespace nettest-3878 found.
STEP: Waiting for NodePort service to expose endpoint
Oct 23 03:52:34.032: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Oct 23 03:52:35.035: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(http) test-container-pod --> 10.233.59.208:80 (config.clusterIP)
Oct 23 03:52:35.040: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:35.040: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:35.436: INFO: Waiting for responses: map[netserver-1:{}]
Oct 23 03:52:37.440: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:37.440: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:37.851: INFO: Waiting for responses: map[netserver-1:{}]
Oct 23 03:52:39.854: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:39.854: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:40.397: INFO: Waiting for responses: map[]
Oct 23 03:52:40.397: INFO: reached 10.233.59.208 after 2/34 tries
STEP: Deleting a pod which will be replaced with a new endpoint
Oct 23 03:52:40.404: INFO: Waiting for pod netserver-0 to disappear
Oct 23 03:52:40.407: INFO: Pod netserver-0 no longer exists
Oct 23 03:52:41.408: INFO: Waiting for amount of service:node-port-service endpoints to be 1
STEP: dialing(http) test-container-pod --> 10.233.59.208:80 (config.clusterIP) (endpoint recovery)
Oct 23 03:52:46.416: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:46.416: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:46.657: INFO: Waiting for responses: map[]
Oct 23 03:52:48.662: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:48.662: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:49.548: INFO: Waiting for responses: map[]
Oct 23 03:52:51.554: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:51.554: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:51.691: INFO: Waiting for responses: map[]
Oct 23 03:52:53.694: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:53.694: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:54.120: INFO: Waiting for responses: map[]
Oct 23 03:52:56.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:56.124: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:56.430: INFO: Waiting for responses: map[]
Oct 23 03:52:58.434: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:52:58.434: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:52:58.848: INFO: Waiting for responses: map[]
Oct 23 03:53:00.854: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:00.854: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:00.946: INFO: Waiting for responses: map[]
Oct 23 03:53:02.948: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:02.948: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:03.766: INFO: Waiting for responses: map[]
Oct 23 03:53:05.770: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:05.770: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:06.378: INFO: Waiting for responses: map[]
Oct 23 03:53:08.382: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:08.382: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:09.003: INFO: Waiting for responses: map[]
Oct 23 03:53:11.006: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:11.006: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:11.099: INFO: Waiting for responses: map[]
Oct 23 03:53:13.103: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:13.103: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:13.196: INFO: Waiting for responses: map[]
Oct 23 03:53:15.201: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:15.202: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:15.313: INFO: Waiting for responses: map[]
Oct 23 03:53:17.316: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:17.316: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:17.403: INFO: Waiting for responses: map[]
Oct 23 03:53:19.407: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:19.407: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:19.949: INFO: Waiting for responses: map[]
Oct 23 03:53:21.952: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:21.952: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:22.044: INFO: Waiting for responses: map[]
Oct 23 03:53:24.048: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:24.048: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:24.301: INFO: Waiting for responses: map[]
Oct 23 03:53:26.304: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:26.304: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:26.396: INFO: Waiting for responses: map[]
Oct 23 03:53:28.399: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:28.399: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:28.620: INFO: Waiting for responses: map[]
Oct 23 03:53:30.624: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:30.624: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:30.750: INFO: Waiting for responses: map[]
Oct 23 03:53:32.754: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:32.754: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:32.839: INFO: Waiting for responses: map[]
Oct 23 03:53:34.845: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:34.845: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:34.932: INFO: Waiting for responses: map[]
Oct 23 03:53:36.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:36.936: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:37.203: INFO: Waiting for responses: map[]
Oct 23 03:53:39.206: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:39.206: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:39.390: INFO: Waiting for responses: map[]
Oct 23 03:53:41.394: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:41.394: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:41.970: INFO: Waiting for responses: map[]
Oct 23 03:53:43.974: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:43.974: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:44.399: INFO: Waiting for responses: map[]
Oct 23 03:53:46.403: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:46.403: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:46.558: INFO: Waiting for responses: map[]
Oct 23 03:53:48.563: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:48.563: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:49.883: INFO: Waiting for responses: map[]
Oct 23 03:53:51.887: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:51.887: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:52.267: INFO: Waiting for responses: map[]
Oct 23 03:53:54.271: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:54.271: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:54.367: INFO: Waiting for responses: map[]
Oct 23 03:53:56.371: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:56.371: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:56.520: INFO: Waiting for responses: map[]
Oct 23 03:53:58.525: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:53:58.525: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:53:58.649: INFO: Waiting for responses: map[]
Oct 23 03:54:00.654: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:54:00.654: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:54:00.756: INFO: Waiting for responses: map[]
Oct 23 03:54:02.760: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.132:9080/dial?request=hostname&protocol=http&host=10.233.59.208&port=80&tries=1'] Namespace:nettest-3878 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 03:54:02.760: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:54:02.905: INFO: Waiting for responses: map[]
Oct 23 03:54:02.905: INFO: reached 10.233.59.208 after 33/34 tries
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:54:02.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3878" for this suite.


• [SLOW TEST:116.132 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should update endpoints: http
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:334
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":-1,"completed":2,"skipped":512,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:24.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
STEP: creating RC slow-terminating-unready-pod with selectors map[name:slow-terminating-unready-pod]
STEP: creating Service tolerate-unready with selectors map[name:slow-terminating-unready-pod testid:tolerate-unready-56b5d3ba-d4fb-45f0-afe8-e99c0cc3f491]
STEP: Verifying pods for RC slow-terminating-unready-pod
Oct 23 03:53:24.167: INFO: Pod name slow-terminating-unready-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: trying to dial each unique pod
Oct 23 03:53:40.181: INFO: Controller slow-terminating-unready-pod: Got non-empty result from replica 1 [slow-terminating-unready-pod-dpdn8]: "NOW: 2021-10-23 03:53:40.181247431 +0000 UTC m=+2.905473453", 1 of 1 required successes so far
STEP: Waiting for endpoints of Service with DNS name tolerate-unready.services-9243.svc.cluster.local
Oct 23 03:53:40.182: INFO: Creating new exec pod
Oct 23 03:53:54.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9243 exec execpod-6hr46 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/'
Oct 23 03:53:54.430: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/\n"
Oct 23 03:53:54.430: INFO: stdout: "NOW: 2021-10-23 03:53:54.423190527 +0000 UTC m=+17.147416496"
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-9243 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Oct 23 03:53:59.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9243 exec execpod-6hr46 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/; test "$?" -ne "0"'
Oct 23 03:53:59.875: INFO: rc: 1
Oct 23 03:53:59.875: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2021-10-23 03:53:59.858979242 +0000 UTC m=+22.583205211, err error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9243 exec execpod-6hr46 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2021-10-23 03:53:59.858979242 +0000 UTC m=+22.583205211
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Oct 23 03:54:01.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9243 exec execpod-6hr46 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/; test "$?" -ne "0"'
Oct 23 03:54:03.149: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Oct 23 03:54:03.149: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Oct 23 03:54:03.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9243 exec execpod-6hr46 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/'
Oct 23 03:54:03.416: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-9243.svc.cluster.local:80/\n"
Oct 23 03:54:03.416: INFO: stdout: "NOW: 2021-10-23 03:54:03.408449536 +0000 UTC m=+26.132675505"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-9243
STEP: deleting service tolerate-unready in namespace services-9243
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:54:03.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9243" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:39.320 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1624
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":3,"skipped":453,"failed":0}
Oct 23 03:54:03.454: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:39.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for endpoint-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256
STEP: Performing setup for networking test in namespace nettest-3739
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:53:39.943: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:39.977: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:41.980: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:43.981: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:45.982: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:47.981: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:49.980: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:51.981: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:53.981: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:55.981: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:57.980: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:59.981: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:01.980: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:54:01.986: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:54:06.007: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:54:06.007: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:54:06.014: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:54:06.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3739" for this suite.


S [SKIPPING] [26.215 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for endpoint-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:256

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Oct 23 03:54:06.027: INFO: Running AfterSuite actions on all nodes
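The skip above is the framework's "Requires at least 2 nodes (not -1)" guard: the multi-node Granular Checks need at least two schedulable nodes, and the node lookup here came back empty. A minimal pre-flight check along these lines (assuming kubectl access with the same kubeconfig used by the suite) can confirm how many schedulable nodes the cluster actually exposes before re-running:

  # List nodes and count them; the skipped nettest cases expect at least two
  # Ready, schedulable nodes (cordoned/NotReady nodes do not count).
  kubectl --kubeconfig=/root/.kube/config get nodes -o wide
  kubectl --kubeconfig=/root/.kube/config get nodes --no-headers | wc -l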


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:30.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
STEP: creating a UDP service svc-udp with type=ClusterIP in conntrack-8721
STEP: creating a client pod for probing the service svc-udp
Oct 23 03:53:30.547: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:32.550: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:34.551: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:36.552: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:38.551: INFO: The status of Pod pod-client is Running (Ready = true)
Oct 23 03:53:38.919: INFO: Pod client logs: Sat Oct 23 03:53:37 UTC 2021
Sat Oct 23 03:53:37 UTC 2021 Try: 1

Sat Oct 23 03:53:37 UTC 2021 Try: 2

Sat Oct 23 03:53:37 UTC 2021 Try: 3

Sat Oct 23 03:53:37 UTC 2021 Try: 4

Sat Oct 23 03:53:37 UTC 2021 Try: 5

Sat Oct 23 03:53:37 UTC 2021 Try: 6

Sat Oct 23 03:53:37 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Oct 23 03:53:38.933: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:40.936: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:42.937: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:44.937: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8721 to expose endpoints map[pod-server-1:[80]]
Oct 23 03:53:44.949: INFO: successfully validated that service svc-udp in namespace conntrack-8721 exposes endpoints map[pod-server-1:[80]]
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
STEP: creating a second backend pod pod-server-2 for the service svc-udp
Oct 23 03:53:54.974: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:56.978: INFO: The status of Pod pod-server-2 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:58.978: INFO: The status of Pod pod-server-2 is Running (Ready = true)
Oct 23 03:53:58.980: INFO: Cleaning up pod-server-1 pod
Oct 23 03:53:58.985: INFO: Waiting for pod pod-server-1 to disappear
Oct 23 03:53:58.988: INFO: Pod pod-server-1 no longer exists
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-8721 to expose endpoints map[pod-server-2:[80]]
Oct 23 03:53:58.996: INFO: successfully validated that service svc-udp in namespace conntrack-8721 exposes endpoints map[pod-server-2:[80]]
STEP: checking client pod connected to the backend 2 on Node IP 10.10.190.208
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:54:09.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-8721" for this suite.


• [SLOW TEST:38.534 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:203
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":2,"skipped":378,"failed":0}
Oct 23 03:54:09.034: INFO: Running AfterSuite actions on all nodes
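The passing conntrack case above cycles the backend of a UDP ClusterIP service (pod-server-1 replaced by pod-server-2) and checks that the client keeps reaching the new endpoint. A rough manual reproduction of the same sequence is sketched below; the agnhost image tag and the netexec --udp-port flag are assumptions drawn from the test image family, not from this log:

  NS=conntrack-demo
  kubectl create namespace "$NS"

  # UDP backend #1 behind a ClusterIP service (agnhost netexec answering on UDP 80).
  kubectl -n "$NS" run pod-server-1 --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 \
    --labels=app=svc-udp -- netexec --udp-port=80
  kubectl -n "$NS" expose pod pod-server-1 --name=svc-udp --selector=app=svc-udp \
    --port=80 --protocol=UDP
  kubectl -n "$NS" wait --for=condition=Ready pod/pod-server-1 --timeout=120s

  # Client pod probing the service over UDP once per second; netexec replies to
  # the "hostname" command with the name of the backend that answered.
  kubectl -n "$NS" run pod-client --image=busybox:1.34 --restart=Never -- \
    sh -c 'while true; do echo hostname | nc -u -w1 svc-udp 80; echo; sleep 1; done'

  # Cycle the backend: bring up pod-server-2 with the same label, then remove pod-server-1.
  kubectl -n "$NS" run pod-server-2 --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 \
    --labels=app=svc-udp -- netexec --udp-port=80
  kubectl -n "$NS" wait --for=condition=Ready pod/pod-server-2 --timeout=120s
  kubectl -n "$NS" delete pod pod-server-1
  kubectl -n "$NS" get endpoints svc-udp     # should now list only pod-server-2

  # The client log should switch to replies from pod-server-2 without restarting pod-client,
  # which is what the conntrack test asserts.
  kubectl -n "$NS" logs pod-client --tail=20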


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:43.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should be able to handle large requests: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461
STEP: Performing setup for networking test in namespace nettest-7025
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:53:43.353: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:43.385: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:45.388: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:47.389: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:49.390: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:51.393: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:53.389: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:55.392: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:57.387: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:53:59.390: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:01.393: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:03.393: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:54:03.397: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 23 03:54:05.404: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:54:09.426: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:54:09.426: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:54:09.432: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:54:09.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7025" for this suite.


S [SKIPPING] [26.201 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should be able to handle large requests: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:461

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Oct 23 03:54:09.444: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:53.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:83
STEP: Executing a successful http request from the external internet
[It] should function for pod-Service: udp
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168
STEP: Performing setup for networking test in namespace nettest-4150
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 23 03:53:53.810: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:53:53.841: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:55.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:57.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:59.849: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:01.846: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:03.845: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:05.848: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:07.844: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:09.847: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:11.848: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:13.844: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 23 03:54:15.844: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 23 03:54:15.850: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 23 03:54:19.876: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
Oct 23 03:54:19.876: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 23 03:54:19.883: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:54:19.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-4150" for this suite.


S [SKIPPING] [26.230 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  Granular Checks: Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:151
    should function for pod-Service: udp [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:168

    Requires at least 2 nodes (not -1)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
Oct 23 03:54:19.894: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:51:49.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
W1023 03:51:49.224816      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:51:49.225: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:51:49.226: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
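The warning and the two lines above show the framework probing for PodSecurityPolicy by attempting a dry-run pod create, then treating PSP as disabled because the cmk.intel.com admission webhook rejects dry-run requests. A comparable manual probe is a server-side dry run against any minimal pod manifest (the pod name and pause image below are illustrative):

  # Server-side dry run: PSP or webhook rejections surface as an error here
  # without anything being created; a webhook that does not support dryRun
  # fails the request the same way this run did.
  cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create --dry-run=server -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: psp-probe
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.5
  EOF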
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
STEP: creating up-down-1 in namespace services-9810
STEP: creating service up-down-1 in namespace services-9810
STEP: creating replication controller up-down-1 in namespace services-9810
I1023 03:51:49.239720      27 runners.go:190] Created replication controller with name: up-down-1, namespace: services-9810, replica count: 3
I1023 03:51:52.291717      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:51:55.293487      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:51:58.297129      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:01.298612      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:04.298923      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:07.299257      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:10.300007      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:13.301231      27 runners.go:190] up-down-1 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: creating up-down-2 in namespace services-9810
STEP: creating service up-down-2 in namespace services-9810
STEP: creating replication controller up-down-2 in namespace services-9810
I1023 03:52:13.315930      27 runners.go:190] Created replication controller with name: up-down-2, namespace: services-9810, replica count: 3
I1023 03:52:16.369288      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:19.370198      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:22.371093      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:25.372158      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:52:28.373113      27 runners.go:190] up-down-2 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
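The runners.go lines above poll the two replication controllers every few seconds until all three replicas of each report Running. Outside the suite, the same gate can be approximated with kubectl; the label key name=<rc-name> is an assumption about how these test pods are labelled, not something printed in this log:

  # Wait for the pods behind each RC to become Ready, then confirm replica counts.
  kubectl -n services-9810 wait --for=condition=Ready pod -l name=up-down-1 --timeout=180s
  kubectl -n services-9810 wait --for=condition=Ready pod -l name=up-down-2 --timeout=180s
  kubectl -n services-9810 get rc up-down-1 up-down-2 -o wide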
STEP: verifying service up-down-1 is up
Oct 23 03:52:28.376: INFO: Creating new host exec pod
Oct 23 03:52:28.393: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:30.396: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:32.397: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:52:32.398: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:52:38.416: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.0.248:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-host-exec-pod
Oct 23 03:52:38.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.0.248:80 2>&1 || true; echo; done'
Oct 23 03:52:38.993: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n"
Oct 23 03:52:38.993: INFO: stdout: "up-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\n"
Oct 23 03:52:38.994: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.0.248:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-exec-pod-l7x86
Oct 23 03:52:38.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-exec-pod-l7x86 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.0.248:80 2>&1 || true; echo; done'
Oct 23 03:52:39.446: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.0.248:80\n+ echo\n"
Oct 23 03:52:39.447: INFO: stdout: "up-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-rfjbf\nup-down-1-b9d7k\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-4ndb4\nup-down-1-b9d7k\nup-down-1-b9d7k\nup-down-1-rfjbf\nup-down-1-rfjbf\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9810
STEP: Deleting pod verify-service-up-exec-pod-l7x86 in namespace services-9810
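The two verification pods above each ran the wget loop shown in the log 150 times against the up-down-1 ClusterIP, and the test then checks that all three backend pod names appear in the output. The same check can be replayed by hand from any pod that has wget (the pod name below is a placeholder; the IP is the one used for up-down-1 in this run), tallying which backends answered and how often:

  # Hit the service 150 times and count replies per backend; all three up-down-1
  # replicas should appear, and bare blank lines would indicate timed-out requests.
  kubectl --kubeconfig=/root/.kube/config -n services-9810 exec <exec-pod> -- \
    /bin/sh -c 'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.0.248:80 2>&1 || true; echo; done' \
    | sort | uniq -c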
STEP: verifying service up-down-2 is up
Oct 23 03:52:39.458: INFO: Creating new host exec pod
Oct 23 03:52:39.469: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:41.473: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:43.473: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:45.475: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:47.472: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:49.472: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:51.474: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:53.474: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:52:55.474: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:52:55.474: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:52:59.493: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-host-exec-pod
Oct 23 03:52:59.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done'
Oct 23 03:52:59.927: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n"
Oct 23 03:52:59.928: INFO: stdout: "up-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-6dx4s\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-6dx4s\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-zhpd2\nup-down-2-dk746\nup-down-2-zhpd2\n"
Oct 23 03:52:59.928: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-exec-pod-gtntv
Oct 23 03:52:59.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-exec-pod-gtntv -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done'
Oct 23 03:53:00.299: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.34.196:80\n+ echo\n [... the '+ wget -q -T 1 -O - http://10.233.34.196:80' / '+ echo' trace repeats for the remainder of the 150 iterations ...]\n"
Oct 23 03:53:00.300: INFO: stdout: "[150 lines: responses from the three up-down-2 backends up-down-2-zhpd2, up-down-2-6dx4s and up-down-2-dk746, each appearing repeatedly]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9810
STEP: Deleting pod verify-service-up-exec-pod-gtntv in namespace services-9810
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-9810, will wait for the garbage collector to delete the pods
Oct 23 03:53:00.387: INFO: Deleting ReplicationController up-down-1 took: 3.915863ms
Oct 23 03:53:00.487: INFO: Terminating ReplicationController up-down-1 pods took: 100.229179ms
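(The two lines above correspond to a cascading delete: the ReplicationController object is removed first and the garbage collector then terminates its pods. A minimal equivalent by hand, assuming kubectl v1.21 flag syntax, would be:)

    kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 \
      delete rc up-down-1 --cascade=background
    # background is the default cascade mode: the RC is removed immediately and
    # its pods are cleaned up asynchronously by the garbage collector.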
STEP: verifying service up-down-1 is not up
Oct 23 03:53:14.399: INFO: Creating new host exec pod
Oct 23 03:53:14.413: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:16.416: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:18.417: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 23 03:53:18.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.0.248:80 && echo service-down-failed'
Oct 23 03:53:20.710: INFO: rc: 28
Oct 23 03:53:20.710: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.0.248:80 && echo service-down-failed" in pod services-9810/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.0.248:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.0.248:80
command terminated with exit code 28

error:
exit status 28
Output: 
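(The rc-28 failure above is the expected outcome of the down-check: curl exit code 28 is its connect-timeout error, so the "echo service-down-failed" guard never runs. A hand-run sketch of the same probe, reusing the pod, namespace and cluster IP already shown in the log:)

    kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 \
      exec verify-service-down-host-exec-pod -- /bin/sh -x -c \
      'curl -g -s --connect-timeout 2 http://10.233.0.248:80 && echo service-down-failed'
    # Exit status 28 (timeout) with no "service-down-failed" on stdout means the
    # service VIP no longer answers, i.e. up-down-1 is confirmed down.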
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9810
STEP: verifying service up-down-2 is still up
Oct 23 03:53:20.717: INFO: Creating new host exec pod
Oct 23 03:53:20.730: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:22.734: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:24.735: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:53:24.735: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:53:38.750: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-host-exec-pod
Oct 23 03:53:38.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done'
Oct 23 03:53:39.395: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n"
Oct 23 03:53:39.396: INFO: stdout: "[150 lines: responses from the three up-down-2 backends up-down-2-zhpd2, up-down-2-dk746 and up-down-2-6dx4s, each appearing repeatedly]\n"
Oct 23 03:53:39.396: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-exec-pod-7l9gh
Oct 23 03:53:39.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-exec-pod-7l9gh -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done'
Oct 23 03:53:39.779: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n [... the '+ wget' / '+ echo' pair repeats for all 150 iterations ...]\n"
Oct 23 03:53:39.779: INFO: stdout: "[150 lines: responses from the three up-down-2 backends up-down-2-dk746, up-down-2-6dx4s and up-down-2-zhpd2, each appearing repeatedly]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9810
STEP: Deleting pod verify-service-up-exec-pod-7l9gh in namespace services-9810
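(Each "verifying service ... is still up" block above runs the same 150-request wget loop from both a host-network pod and a regular exec pod and checks that every response names an expected backend and that all backends show up. A minimal way to reproduce that tally by hand, reusing the pod, namespace and cluster IP from the log; the sort/uniq step is illustrative and not part of the test itself:)

    kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 \
      exec verify-service-up-host-exec-pod -- /bin/sh -c \
      'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done' \
      | sort | uniq -c
    # Expect exactly three distinct backend names (up-down-2-zhpd2,
    # up-down-2-dk746, up-down-2-6dx4s), each with a non-zero count.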
STEP: creating service up-down-3 in namespace services-9810
STEP: creating service up-down-3 in namespace services-9810
STEP: creating replication controller up-down-3 in namespace services-9810
I1023 03:53:39.798935      27 runners.go:190] Created replication controller with name: up-down-3, namespace: services-9810, replica count: 3
I1023 03:53:42.851804      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:45.853150      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:48.853697      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:51.854147      27 runners.go:190] up-down-3 Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
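(The runner lines above poll the ReplicationController until all 3 replicas report Running. Outside the harness the same readiness can be checked with a sketch like the following; the status fields exist on a v1 ReplicationController, the rest is illustrative:)

    kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 \
      get rc up-down-3 -o jsonpath='{.status.replicas}/{.status.readyReplicas}{"\n"}'
    # Prints e.g. "3/3" once every replica of up-down-3 is running and ready.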
STEP: verifying service up-down-2 is still up
Oct 23 03:53:51.857: INFO: Creating new host exec pod
Oct 23 03:53:51.877: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:53.879: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:55.884: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:53:57.882: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:53:57.882: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:54:03.898: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-host-exec-pod
Oct 23 03:54:03.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done'
Oct 23 03:54:04.542: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n [... the '+ wget' / '+ echo' pair repeats for all 150 iterations ...]\n"
Oct 23 03:54:04.543: INFO: stdout: "[150 lines: responses from the three up-down-2 backends up-down-2-zhpd2, up-down-2-dk746 and up-down-2-6dx4s, each appearing repeatedly]\n"
Oct 23 03:54:04.543: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-exec-pod-2q7ws
Oct 23 03:54:04.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-exec-pod-2q7ws -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.34.196:80 2>&1 || true; echo; done'
Oct 23 03:54:04.958: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.34.196:80\n+ echo\n [... the '+ wget' / '+ echo' pair repeats for all 150 iterations ...]\n"
Oct 23 03:54:04.958: INFO: stdout: "[150 lines: responses from the three up-down-2 backends up-down-2-zhpd2, up-down-2-6dx4s and up-down-2-dk746, each appearing repeatedly]\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9810
STEP: Deleting pod verify-service-up-exec-pod-2q7ws in namespace services-9810
STEP: verifying service up-down-3 is up
Oct 23 03:54:04.974: INFO: Creating new host exec pod
Oct 23 03:54:04.987: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:06.991: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:08.991: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:10.991: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:12.990: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:14.992: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:16.992: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:18.991: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:20.991: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:54:20.992: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:54:25.007: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.1.183:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-host-exec-pod
Oct 23 03:54:25.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.1.183:80 2>&1 || true; echo; done'
Oct 23 03:54:25.418: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n [... the '+ wget' / '+ echo' pair repeats for all 150 iterations ...]\n"
Oct 23 03:54:25.418: INFO: stdout: "up-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\n"
Oct 23 03:54:25.419: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.1.183:80 2>&1 || true; echo; done" in pod services-9810/verify-service-up-exec-pod-xwggl
Oct 23 03:54:25.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 exec verify-service-up-exec-pod-xwggl -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.1.183:80 2>&1 || true; echo; done'
Oct 23 03:54:25.845: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - 
http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.1.183:80\n+ echo\n"
Oct 23 03:54:25.846: INFO: stdout: "up-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\nup-down-3-kddjc\nup-down-3-gqlhk\nup-down-3-kddjc\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-qtcv9\nup-down-3-kddjc\n"
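The two probes above make up the test's service-up check: the same 150-iteration wget loop is executed once from verify-service-up-host-exec-pod and once from the regular exec pod verify-service-up-exec-pod-xwggl. It can be re-run by hand against this run's namespace, pod name and ClusterIP (all specific to this run); a minimal sketch:

  # Re-run the service-up probe from inside the exec pod. -q and -T 1 keep
  # each attempt quiet and bounded to one second; "|| true" lets the loop
  # continue past individual failures, and the bare "echo" separates replies.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-9810 \
    exec verify-service-up-exec-pod-xwggl -- /bin/sh -c \
    'for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.1.183:80 2>&1 || true; echo; done'

Each stdout line is the hostname returned by whichever backend answered, which is why the output above interleaves up-down-3-gqlhk, up-down-3-kddjc and up-down-3-qtcv9.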
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9810
STEP: Deleting pod verify-service-up-exec-pod-xwggl in namespace services-9810
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:54:25.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9810" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:156.715 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":1,"skipped":128,"failed":0}
Oct 23 03:54:25.876: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:40.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
STEP: creating service-headless in namespace services-1445
STEP: creating service service-headless in namespace services-1445
STEP: creating replication controller service-headless in namespace services-1445
I1023 03:53:40.699750      33 runners.go:190] Created replication controller with name: service-headless, namespace: services-1445, replica count: 3
I1023 03:53:43.751479      33 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:46.753309      33 runners.go:190] service-headless Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:49.753928      33 runners.go:190] service-headless Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:52.754077      33 runners.go:190] service-headless Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:55.755851      33 runners.go:190] service-headless Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
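The runners.go lines above track the service-headless replication controller going from 3 pending to 3 running in roughly fifteen seconds. The same progression can be watched outside the framework; the label selector below is an assumption (e2e runner pods are typically labeled name=<rc-name>), not something this log states:

  # Watch the replication controller's pods until all three report Running.
  # The selector name=service-headless is an assumed convention, not taken
  # from this log; adjust it to whatever labels the pods actually carry.
  kubectl --kubeconfig=/root/.kube/config -n services-1445 \
    get pods -l name=service-headless -w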
STEP: creating service in namespace services-1445
STEP: creating service service-headless-toggled in namespace services-1445
STEP: creating replication controller service-headless-toggled in namespace services-1445
I1023 03:53:55.767063      33 runners.go:190] Created replication controller with name: service-headless-toggled, namespace: services-1445, replica count: 3
I1023 03:53:58.818380      33 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:54:01.820199      33 runners.go:190] service-headless-toggled Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: verifying service is up
Oct 23 03:54:01.822: INFO: Creating new host exec pod
Oct 23 03:54:01.837: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:03.842: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:05.841: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:54:05.841: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:54:09.857: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.158:80 2>&1 || true; echo; done" in pod services-1445/verify-service-up-host-exec-pod
Oct 23 03:54:09.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.158:80 2>&1 || true; echo; done'
Oct 23 03:54:10.184: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n"
Oct 23 03:54:10.184: INFO: stdout: "service-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\
nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\n"
Oct 23 03:54:10.185: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.158:80 2>&1 || true; echo; done" in pod services-1445/verify-service-up-exec-pod-xtld4
Oct 23 03:54:10.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-up-exec-pod-xtld4 -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.158:80 2>&1 || true; echo; done'
Oct 23 03:54:10.551: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n"
Oct 23 03:54:10.552: INFO: stdout: "service-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\
nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\n"
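"verifying service has 3 reachable backends" amounts to checking that the 150 replies above cover all three pods behind the toggled service (service-headless-toggled-z7wmc, -q89s8 and -rghdp). A sketch of making the same determination from a captured probe output; probe.out is a hypothetical file holding the wget loop's stdout, one reply per line:

  # Tally how many times each distinct backend hostname answered.
  sort probe.out | uniq -c
  # Three distinct names, each with a healthy share of the 150 replies,
  # is what "3 reachable backends" means here.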
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1445
STEP: Deleting pod verify-service-up-exec-pod-xtld4 in namespace services-1445
STEP: verifying service-headless is not up
Oct 23 03:54:10.568: INFO: Creating new host exec pod
Oct 23 03:54:10.579: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:12.584: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:14.583: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 23 03:54:14.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.42.128:80 && echo service-down-failed'
Oct 23 03:54:16.878: INFO: rc: 28
Oct 23 03:54:16.878: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.42.128:80 && echo service-down-failed" in pod services-1445/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.42.128:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.42.128:80
command terminated with exit code 28

error:
exit status 28
Output: 
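The down-check is the inverse of the probe loop: a single curl from the host-exec pod against the address checked for service-headless, with a two-second connect timeout. curl exit code 28 is its timeout status, so rc: 28 with no "service-down-failed" marker in stdout is exactly the outcome the test is looking for. Re-run by hand, as a sketch:

  # The "service is down" probe: one curl with a 2s connect timeout. If the
  # ClusterIP is not being served, curl times out (exit 28) and the
  # service-down-failed marker is never echoed, so the check passes.
  kubectl --kubeconfig=/root/.kube/config -n services-1445 \
    exec verify-service-down-host-exec-pod -- /bin/sh -c \
    'curl -g -s --connect-timeout 2 http://10.233.42.128:80 && echo service-down-failed'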
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1445
STEP: adding service.kubernetes.io/headless label
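Adding the service.kubernetes.io/headless label is the toggle under test: once the label is present on the service, kube-proxy is expected to stop serving its ClusterIP (10.233.30.158), which the down-check that follows confirms. The test flips the label through the API; an equivalent manual toggle might look like the sketch below, where the empty label value is an assumption:

  # Mark the toggled service as headless so kube-proxy ignores its ClusterIP.
  kubectl --kubeconfig=/root/.kube/config -n services-1445 \
    label service service-headless-toggled service.kubernetes.io/headless=""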
STEP: verifying service is not up
Oct 23 03:54:16.896: INFO: Creating new host exec pod
Oct 23 03:54:16.907: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:18.910: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:20.912: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:22.912: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:24.911: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:26.910: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:28.918: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:30.914: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:32.910: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 23 03:54:32.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.30.158:80 && echo service-down-failed'
Oct 23 03:54:35.247: INFO: rc: 28
Oct 23 03:54:35.247: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.30.158:80 && echo service-down-failed" in pod services-1445/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.30.158:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.30.158:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1445
STEP: removing service.kubernetes.io/headless annotation
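The step text says annotation; assuming it is the same service.kubernetes.io/headless key from the earlier "adding ... label" step that is being cleared, removing it re-enables the ClusterIP, and the steps that follow repeat the service-up verification against 10.233.30.158. A manual equivalent of the removal under that assumption:

  # Remove the headless marker again; the trailing "-" deletes the label key.
  kubectl --kubeconfig=/root/.kube/config -n services-1445 \
    label service service-headless-toggled service.kubernetes.io/headless-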
STEP: verifying service is up
Oct 23 03:54:35.265: INFO: Creating new host exec pod
Oct 23 03:54:35.279: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:37.283: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:39.282: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:41.283: INFO: The status of Pod verify-service-up-host-exec-pod is Running (Ready = true)
Oct 23 03:54:41.284: INFO: Creating new exec pod
STEP: verifying service has 3 reachable backends
Oct 23 03:54:45.302: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.158:80 2>&1 || true; echo; done" in pod services-1445/verify-service-up-host-exec-pod
Oct 23 03:54:45.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.158:80 2>&1 || true; echo; done'
Oct 23 03:54:45.656: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n"
Oct 23 03:54:45.656: INFO: stdout: "service-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\
nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\n"
Oct 23 03:54:45.657: INFO: Executing cmd "for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.158:80 2>&1 || true; echo; done" in pod services-1445/verify-service-up-exec-pod-2r7jb
Oct 23 03:54:45.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-up-exec-pod-2r7jb -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://10.233.30.158:80 2>&1 || true; echo; done'
Oct 23 03:54:46.020: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget 
-q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q 
-T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n+ wget -q -T 1 -O - http://10.233.30.158:80\n+ echo\n"
Oct 23 03:54:46.020: INFO: stdout: "service-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\
nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-rghdp\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-z7wmc\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-rghdp\nservice-headless-toggled-q89s8\nservice-headless-toggled-q89s8\n"
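The two exec pods above run the same verification loop. A minimal standalone sketch of that check follows; the ClusterIP 10.233.30.158, the 150-iteration wget loop, and the three endpoint pod names are taken from the output above, while the pass criterion ("every expected hostname must appear at least once") is my reading of the step, not framework output.

    # Sketch of the "service is up" check performed by the exec pods above.
    # Assumes a busybox-style shell with wget, as used inside the pods.
    SERVICE_IP=10.233.30.158
    EXPECTED="service-headless-toggled-q89s8 service-headless-toggled-rghdp service-headless-toggled-z7wmc"

    # The same 150-iteration, 1-second-timeout probe loop shown in the stderr above.
    OUT=$(for i in $(seq 1 150); do wget -q -T 1 -O - "http://${SERVICE_IP}:80" 2>&1 || true; echo; done)

    # The service counts as up only if every expected backend hostname replied at least once.
    for name in $EXPECTED; do
      echo "$OUT" | grep -q "$name" || { echo "missing endpoint: $name"; exit 1; }
    done
    echo "service up: all expected endpoints responded"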
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-1445
STEP: Deleting pod verify-service-up-exec-pod-2r7jb in namespace services-1445
STEP: verifying service-headless is still not up
Oct 23 03:54:46.036: INFO: Creating new host exec pod
Oct 23 03:54:46.048: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:48.052: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Oct 23 03:54:48.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.42.128:80 && echo service-down-failed'
Oct 23 03:54:50.288: INFO: rc: 28
Oct 23 03:54:50.289: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://10.233.42.128:80 && echo service-down-failed" in pod services-1445/verify-service-down-host-exec-pod: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-1445 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://10.233.42.128:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://10.233.42.128:80
command terminated with exit code 28

error:
exit status 28
Output: 
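The opposite check, run just above, expects the probe to time out: rc 28 is curl's connect-timeout exit code, so nothing answered on the VIP 10.233.42.128. A minimal sketch using the exact command from the log (the interpretation of the exit code is my reading of the step):

    # Sketch of the "service is down" check.
    curl -g -s --connect-timeout 2 http://10.233.42.128:80 && echo service-down-failed
    rc=$?
    # rc=28 (no answer within 2s) is the outcome this step wants: the
    # headless-labelled service must not be reachable, so a successful
    # response and the "service-down-failed" marker would fail the test.
    [ "$rc" -eq 28 ] && echo "service is down, as expected"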
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1445
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:54:50.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1445" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• [SLOW TEST:69.634 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1916
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":550,"failed":0}
Oct 23 03:54:50.308: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:58.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96
[It] should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
STEP: creating a UDP service svc-udp with type=NodePort in conntrack-2574
STEP: creating a client pod for probing the service svc-udp
Oct 23 03:53:58.662: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:00.668: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:02.667: INFO: The status of Pod pod-client is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:04.666: INFO: The status of Pod pod-client is Running (Ready = true)
Oct 23 03:54:04.675: INFO: Pod client logs: Sat Oct 23 03:54:02 UTC 2021
Sat Oct 23 03:54:02 UTC 2021 Try: 1

Sat Oct 23 03:54:02 UTC 2021 Try: 2

Sat Oct 23 03:54:02 UTC 2021 Try: 3

Sat Oct 23 03:54:02 UTC 2021 Try: 4

Sat Oct 23 03:54:02 UTC 2021 Try: 5

Sat Oct 23 03:54:02 UTC 2021 Try: 6

Sat Oct 23 03:54:02 UTC 2021 Try: 7

STEP: creating a backend pod pod-server-1 for the service svc-udp
Oct 23 03:54:04.687: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:06.691: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:08.690: INFO: The status of Pod pod-server-1 is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:54:10.692: INFO: The status of Pod pod-server-1 is Running (Ready = true)
STEP: waiting up to 3m0s for service svc-udp in namespace conntrack-2574 to expose endpoints map[pod-server-1:[80]]
Oct 23 03:54:10.703: INFO: successfully validated that service svc-udp in namespace conntrack-2574 exposes endpoints map[pod-server-1:[80]]
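The client pod's log format below (a timestamped "Try: N" line followed by a blank line) suggests a probe loop along these lines. This is a sketch only: the netcat flags, source port, and NodePort value are assumptions, and only the node IP 10.10.190.208 and the backend name are taken from the log.

    # Hypothetical client-side probe loop for svc-udp (NodePort and nc flags assumed).
    NODE_IP=10.10.190.208
    NODE_PORT=30000   # placeholder; the real NodePort is allocated by the API server

    for i in $(seq 1 300); do
      echo "$(date) Try: ${i}"
      # One short UDP probe per iteration; a reachable backend (pod-server-1,
      # the agnhost container created above) would write its reply here, and
      # the test then looks for that backend's name in these client logs.
      echo hostname | nc -u -w 1 "${NODE_IP}" "${NODE_PORT}"
      echo
    done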
STEP: checking client pod connected to the backend 1 on Node IP 10.10.190.208
Oct 23 03:55:10.736: INFO: Pod client logs: Sat Oct 23 03:54:02 UTC 2021
Sat Oct 23 03:54:02 UTC 2021 Try: 1

Sat Oct 23 03:54:02 UTC 2021 Try: 2

Sat Oct 23 03:54:02 UTC 2021 Try: 3

Sat Oct 23 03:54:02 UTC 2021 Try: 4

Sat Oct 23 03:54:02 UTC 2021 Try: 5

Sat Oct 23 03:54:02 UTC 2021 Try: 6

Sat Oct 23 03:54:02 UTC 2021 Try: 7

Sat Oct 23 03:54:07 UTC 2021 Try: 8

Sat Oct 23 03:54:07 UTC 2021 Try: 9

Sat Oct 23 03:54:07 UTC 2021 Try: 10

Sat Oct 23 03:54:07 UTC 2021 Try: 11

Sat Oct 23 03:54:07 UTC 2021 Try: 12

Sat Oct 23 03:54:07 UTC 2021 Try: 13

Sat Oct 23 03:54:12 UTC 2021 Try: 14

Sat Oct 23 03:54:12 UTC 2021 Try: 15

Sat Oct 23 03:54:12 UTC 2021 Try: 16

Sat Oct 23 03:54:12 UTC 2021 Try: 17

Sat Oct 23 03:54:12 UTC 2021 Try: 18

Sat Oct 23 03:54:12 UTC 2021 Try: 19

Sat Oct 23 03:54:17 UTC 2021 Try: 20

Sat Oct 23 03:54:17 UTC 2021 Try: 21

Sat Oct 23 03:54:17 UTC 2021 Try: 22

Sat Oct 23 03:54:17 UTC 2021 Try: 23

Sat Oct 23 03:54:17 UTC 2021 Try: 24

Sat Oct 23 03:54:17 UTC 2021 Try: 25

Sat Oct 23 03:54:22 UTC 2021 Try: 26

Sat Oct 23 03:54:22 UTC 2021 Try: 27

Sat Oct 23 03:54:22 UTC 2021 Try: 28

Sat Oct 23 03:54:22 UTC 2021 Try: 29

Sat Oct 23 03:54:22 UTC 2021 Try: 30

Sat Oct 23 03:54:22 UTC 2021 Try: 31

Sat Oct 23 03:54:27 UTC 2021 Try: 32

Sat Oct 23 03:54:27 UTC 2021 Try: 33

Sat Oct 23 03:54:27 UTC 2021 Try: 34

Sat Oct 23 03:54:27 UTC 2021 Try: 35

Sat Oct 23 03:54:27 UTC 2021 Try: 36

Sat Oct 23 03:54:27 UTC 2021 Try: 37

Sat Oct 23 03:54:32 UTC 2021 Try: 38

Sat Oct 23 03:54:32 UTC 2021 Try: 39

Sat Oct 23 03:54:32 UTC 2021 Try: 40

Sat Oct 23 03:54:32 UTC 2021 Try: 41

Sat Oct 23 03:54:32 UTC 2021 Try: 42

Sat Oct 23 03:54:32 UTC 2021 Try: 43

Sat Oct 23 03:54:37 UTC 2021 Try: 44

Sat Oct 23 03:54:37 UTC 2021 Try: 45

Sat Oct 23 03:54:37 UTC 2021 Try: 46

Sat Oct 23 03:54:37 UTC 2021 Try: 47

Sat Oct 23 03:54:37 UTC 2021 Try: 48

Sat Oct 23 03:54:37 UTC 2021 Try: 49

Sat Oct 23 03:54:42 UTC 2021 Try: 50

Sat Oct 23 03:54:42 UTC 2021 Try: 51

Sat Oct 23 03:54:42 UTC 2021 Try: 52

Sat Oct 23 03:54:42 UTC 2021 Try: 53

Sat Oct 23 03:54:42 UTC 2021 Try: 54

Sat Oct 23 03:54:42 UTC 2021 Try: 55

Sat Oct 23 03:54:47 UTC 2021 Try: 56

Sat Oct 23 03:54:47 UTC 2021 Try: 57

Sat Oct 23 03:54:47 UTC 2021 Try: 58

Sat Oct 23 03:54:47 UTC 2021 Try: 59

Sat Oct 23 03:54:47 UTC 2021 Try: 60

Sat Oct 23 03:54:47 UTC 2021 Try: 61

Sat Oct 23 03:54:52 UTC 2021 Try: 62

Sat Oct 23 03:54:52 UTC 2021 Try: 63

Sat Oct 23 03:54:52 UTC 2021 Try: 64

Sat Oct 23 03:54:52 UTC 2021 Try: 65

Sat Oct 23 03:54:52 UTC 2021 Try: 66

Sat Oct 23 03:54:52 UTC 2021 Try: 67

Sat Oct 23 03:54:57 UTC 2021 Try: 68

Sat Oct 23 03:54:57 UTC 2021 Try: 69

Sat Oct 23 03:54:57 UTC 2021 Try: 70

Sat Oct 23 03:54:57 UTC 2021 Try: 71

Sat Oct 23 03:54:57 UTC 2021 Try: 72

Sat Oct 23 03:54:57 UTC 2021 Try: 73

Sat Oct 23 03:55:02 UTC 2021 Try: 74

Sat Oct 23 03:55:02 UTC 2021 Try: 75

Sat Oct 23 03:55:02 UTC 2021 Try: 76

Sat Oct 23 03:55:02 UTC 2021 Try: 77

Sat Oct 23 03:55:02 UTC 2021 Try: 78

Sat Oct 23 03:55:02 UTC 2021 Try: 79

Sat Oct 23 03:55:07 UTC 2021 Try: 80

Sat Oct 23 03:55:07 UTC 2021 Try: 81

Sat Oct 23 03:55:07 UTC 2021 Try: 82

Sat Oct 23 03:55:07 UTC 2021 Try: 83

Sat Oct 23 03:55:07 UTC 2021 Try: 84

Sat Oct 23 03:55:07 UTC 2021 Try: 85

Oct 23 03:55:10.736: FAIL: Failed to connect to backend 1

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001515680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001515680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001515680, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
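The failure is decided from the client logs shown above: sixty seconds after the endpoints were validated there is still no reply line from pod-server-1 between the "Try" entries, so the UDP traffic sent to the NodePort never reached the backend. The same criterion can be checked by hand (namespace and pod names are from the log):

    # Count replies from the backend in the client pod's log; a count of 0
    # matches the "Failed to connect to backend 1" result above.
    kubectl --kubeconfig=/root/.kube/config -n conntrack-2574 logs pod-client | grep -c pod-server-1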
[AfterEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "conntrack-2574".
STEP: Found 8 events.
Oct 23 03:55:10.740: INFO: At 2021-10-23 03:54:01 +0000 UTC - event for pod-client: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 03:55:10.741: INFO: At 2021-10-23 03:54:02 +0000 UTC - event for pod-client: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 289.200231ms
Oct 23 03:55:10.741: INFO: At 2021-10-23 03:54:02 +0000 UTC - event for pod-client: {kubelet node1} Created: Created container pod-client
Oct 23 03:55:10.741: INFO: At 2021-10-23 03:54:02 +0000 UTC - event for pod-client: {kubelet node1} Started: Started container pod-client
Oct 23 03:55:10.741: INFO: At 2021-10-23 03:54:06 +0000 UTC - event for pod-server-1: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 03:55:10.741: INFO: At 2021-10-23 03:54:07 +0000 UTC - event for pod-server-1: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 548.769922ms
Oct 23 03:55:10.741: INFO: At 2021-10-23 03:54:07 +0000 UTC - event for pod-server-1: {kubelet node2} Created: Created container agnhost-container
Oct 23 03:55:10.741: INFO: At 2021-10-23 03:54:07 +0000 UTC - event for pod-server-1: {kubelet node2} Started: Started container agnhost-container
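The event dump above is the framework's standard failure diagnostics for the namespace; roughly the same view can be obtained by hand (standard kubectl flags, not taken from the log):

    # Namespace events in time order, as gathered by the AfterEach step.
    kubectl --kubeconfig=/root/.kube/config -n conntrack-2574 get events --sort-by=.lastTimestamp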
Oct 23 03:55:10.743: INFO: POD           NODE   PHASE    GRACE  CONDITIONS
Oct 23 03:55:10.743: INFO: pod-client    node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:58 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:54:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:54:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:58 +0000 UTC  }]
Oct 23 03:55:10.743: INFO: pod-server-1  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:54:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:54:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:54:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:54:04 +0000 UTC  }]
Oct 23 03:55:10.743: INFO: 
Oct 23 03:55:10.748: INFO: 
Logging node info for node master1
Oct 23 03:55:10.751: INFO: Node Info: &Node{ObjectMeta:{master1    1b0e9b6c-fa73-4303-880f-3c662903b3ba 146773 0 2021-10-22 21:03:37 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:00 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:00 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:00 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:55:00 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:55:10.751: INFO: 
Logging kubelet events for node master1
Oct 23 03:55:10.753: INFO: 
Logging pods the kubelet thinks are on node master1
Oct 23 03:55:10.782: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Init container install-cni ready: true, restart count 1
Oct 23 03:55:10.782: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 23 03:55:10.782: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:55:10.782: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Container coredns ready: true, restart count 2
Oct 23 03:55:10.782: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Container docker-registry ready: true, restart count 0
Oct 23 03:55:10.782: INFO: 	Container nginx ready: true, restart count 0
Oct 23 03:55:10.782: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:55:10.782: INFO: 	Container node-exporter ready: true, restart count 0
Oct 23 03:55:10.782: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 23 03:55:10.782: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Container kube-controller-manager ready: true, restart count 1
Oct 23 03:55:10.782: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 23 03:55:10.782: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.782: INFO: 	Container kube-scheduler ready: true, restart count 0
W1023 03:55:10.797178      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:55:10.871: INFO: 
Latency metrics for node master1
Oct 23 03:55:10.871: INFO: 
Logging node info for node master2
Oct 23 03:55:10.874: INFO: Node Info: &Node{ObjectMeta:{master2    48070097-b11c-473d-9240-f4ee02bd7e2f 146786 0 2021-10-22 21:04:08 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:55:04 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:55:10.874: INFO: 
Logging kubelet events for node master2
Oct 23 03:55:10.878: INFO: 
Logging pods the kubelet thinks are on node master2
Oct 23 03:55:10.900: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.900: INFO: 	Container kube-controller-manager ready: true, restart count 2
Oct 23 03:55:10.900: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.900: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 23 03:55:10.900: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.900: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 23 03:55:10.900: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:55:10.900: INFO: 	Init container install-cni ready: true, restart count 2
Oct 23 03:55:10.900: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 23 03:55:10.900: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.900: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:55:10.900: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.900: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 23 03:55:10.900: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:10.900: INFO: 	Container autoscaler ready: true, restart count 1
Oct 23 03:55:10.900: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:10.900: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:55:10.900: INFO: 	Container node-exporter ready: true, restart count 0
W1023 03:55:10.912696      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:55:11.285: INFO: 
Latency metrics for node master2
Oct 23 03:55:11.285: INFO: 
Logging node info for node master3
Oct 23 03:55:11.288: INFO: Node Info: &Node{ObjectMeta:{master3    fe22a467-e2de-4b64-9399-d274e6d13231 146797 0 2021-10-22 21:04:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:09 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:09 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:09 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:55:09 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:55:11.289: INFO: 
Logging kubelet events for node master3
Oct 23 03:55:11.291: INFO: 
Logging pods the kubelet thinks are on node master3
Oct 23 03:55:11.308: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 23 03:55:11.308: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:55:11.308: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Container coredns ready: true, restart count 2
Oct 23 03:55:11.308: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 23 03:55:11.308: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Container kube-controller-manager ready: true, restart count 2
Oct 23 03:55:11.308: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 23 03:55:11.308: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Init container install-cni ready: true, restart count 1
Oct 23 03:55:11.308: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 23 03:55:11.308: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Container nfd-controller ready: true, restart count 0
Oct 23 03:55:11.308: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:11.308: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:55:11.308: INFO: 	Container node-exporter ready: true, restart count 0
W1023 03:55:11.323100      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:55:11.399: INFO: 
Latency metrics for node master3
Oct 23 03:55:11.399: INFO: 
Logging node info for node node1
Oct 23 03:55:11.401: INFO: Node Info: &Node{ObjectMeta:{node1    1c590bf6-8845-4681-8fa1-7acc55183d29 146775 0 2021-10-22 21:05:23 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 02:09:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:02 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:02 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:02 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:55:02 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:55:11.402: INFO: 
Logging kubelet events for node node1
Oct 23 03:55:11.404: INFO: 
Logging pods the kubelet thinks are on node node1
Oct 23 03:55:11.419: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Init container install-cni ready: true, restart count 2
Oct 23 03:55:11.419: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 23 03:55:11.419: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 03:55:11.419: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:55:11.419: INFO: 	Container node-exporter ready: true, restart count 0
Oct 23 03:55:11.419: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 23 03:55:11.419: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 23 03:55:11.419: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container config-reloader ready: true, restart count 0
Oct 23 03:55:11.419: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 03:55:11.419: INFO: 	Container grafana ready: true, restart count 0
Oct 23 03:55:11.419: INFO: 	Container prometheus ready: true, restart count 1
Oct 23 03:55:11.419: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container collectd ready: true, restart count 0
Oct 23 03:55:11.419: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 23 03:55:11.419: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 23 03:55:11.419: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:55:11.419: INFO: 	Container prometheus-operator ready: true, restart count 0
Oct 23 03:55:11.419: INFO: pod-client started at 2021-10-23 03:53:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container pod-client ready: true, restart count 0
Oct 23 03:55:11.419: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 23 03:55:11.419: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:55:11.419: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 23 03:55:11.419: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container discover ready: false, restart count 0
Oct 23 03:55:11.419: INFO: 	Container init ready: false, restart count 0
Oct 23 03:55:11.419: INFO: 	Container install ready: false, restart count 0
Oct 23 03:55:11.419: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container nodereport ready: true, restart count 0
Oct 23 03:55:11.419: INFO: 	Container reconcile ready: true, restart count 0
Oct 23 03:55:11.419: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.419: INFO: 	Container nginx-proxy ready: true, restart count 2
W1023 03:55:11.439795      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:55:11.589: INFO: 
Latency metrics for node node1
Oct 23 03:55:11.589: INFO: 
Logging node info for node node2
Oct 23 03:55:11.592: INFO: Node Info: &Node{ObjectMeta:{node2    bdba54c1-d4eb-4c09-a343-50f320ccb048 146792 0 2021-10-22 21:05:23 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 02:09:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:06 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:06 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:06 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:55:06 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:55:11.593: INFO: 
Logging kubelet events for node node2
Oct 23 03:55:11.596: INFO: 
Logging pods the kubelet thinks are on node node2
Oct 23 03:55:11.612: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Init container install-cni ready: true, restart count 1
Oct 23 03:55:11.612: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 23 03:55:11.612: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container tas-extender ready: true, restart count 0
Oct 23 03:55:11.612: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container collectd ready: true, restart count 0
Oct 23 03:55:11.612: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 23 03:55:11.612: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 23 03:55:11.612: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 23 03:55:11.612: INFO: pod-server-1 started at 2021-10-23 03:54:04 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 23 03:55:11.612: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container nodereport ready: true, restart count 1
Oct 23 03:55:11.612: INFO: 	Container reconcile ready: true, restart count 0
Oct 23 03:55:11.612: INFO: nodeport-update-service-vbg6b started at 2021-10-23 03:53:48 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container nodeport-update-service ready: true, restart count 0
Oct 23 03:55:11.612: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 23 03:55:11.612: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container discover ready: false, restart count 0
Oct 23 03:55:11.612: INFO: 	Container init ready: false, restart count 0
Oct 23 03:55:11.612: INFO: 	Container install ready: false, restart count 0
Oct 23 03:55:11.612: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container cmk-webhook ready: true, restart count 0
Oct 23 03:55:11.612: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:55:11.612: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 23 03:55:11.612: INFO: execpodttjjx started at 2021-10-23 03:53:57 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 23 03:55:11.612: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 23 03:55:11.612: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:55:11.612: INFO: 	Container node-exporter ready: true, restart count 0
Oct 23 03:55:11.612: INFO: nodeport-update-service-nc97l started at 2021-10-23 03:53:48 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:55:11.612: INFO: 	Container nodeport-update-service ready: true, restart count 0
W1023 03:55:11.626277      28 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:55:11.755: INFO: 
Latency metrics for node node2
Oct 23 03:55:11.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conntrack-2574" for this suite.


• Failure [73.148 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130

  Oct 23 03:55:10.736: Failed to connect to backend 1

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":1,"skipped":301,"failed":1,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service"]}
Oct 23 03:55:11.770: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:53:48.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
STEP: creating a TCP service nodeport-update-service with type=ClusterIP in namespace services-484
Oct 23 03:53:48.051: INFO: Service Port TCP: 80
STEP: changing the TCP service to type=NodePort
STEP: creating replication controller nodeport-update-service in namespace services-484
I1023 03:53:48.063295      24 runners.go:190] Created replication controller with name: nodeport-update-service, namespace: services-484, replica count: 2
I1023 03:53:51.114346      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:54.114808      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1023 03:53:57.115989      24 runners.go:190] nodeport-update-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
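
Editor's note: the STEP lines above set up the fixture for this spec. A rough manual equivalent is sketched below, assuming only the names visible in this log (service nodeport-update-service, namespace services-484); the exact spec the test applies may differ.

# Hedged sketch of the setup steps, not the test's actual code.
kubectl --kubeconfig=/root/.kube/config -n services-484 \
  create service clusterip nodeport-update-service --tcp=80:80
# Switch the existing ClusterIP service to type=NodePort in place.
kubectl --kubeconfig=/root/.kube/config -n services-484 \
  patch service nodeport-update-service -p '{"spec":{"type":"NodePort"}}'
# The test then backs the service with a 2-replica replication controller
# (see the runners.go lines above).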
Oct 23 03:53:57.116: INFO: Creating new exec pod
Oct 23 03:54:04.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-484 exec execpodttjjx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Oct 23 03:54:04.424: INFO: stderr: "+ nc -v -t -w 2 nodeport-update-service 80\n+ echo hostName\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Oct 23 03:54:04.424: INFO: stdout: ""
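
Editor's note: the probe above connected at the TCP level but produced empty stdout, and the framework keeps reissuing it (lines below). A sketch of an equivalent manual retry loop follows, under the assumption that the success criterion is a non-empty response from the service; that criterion is an assumption, not taken from this log.

# Hedged retry-loop sketch; names are from this log, the exit condition is assumed.
until out=$(kubectl --kubeconfig=/root/.kube/config -n services-484 \
      exec execpodttjjx -- /bin/sh -c \
      'echo hostName | nc -v -t -w 2 nodeport-update-service 80') \
      && [ -n "$out" ]; do
  sleep 1   # back off briefly between attempts, as the framework does
done
echo "got response: $out"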
Oct 23 03:54:05.425 - 03:56:03.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-484 exec execpodttjjx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80' (retried roughly once per second throughout this window)
Oct 23 03:54:05.425 - 03:56:03.669: INFO: every attempt produced the same output: stderr "+ nc -v -t -w 2 nodeport-update-service 80\n+ echo hostName\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n" and stdout ""
Oct 23 03:56:04.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-484 exec execpodttjjx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Oct 23 03:56:04.689: INFO: stderr: "+ nc -v -t -w 2 nodeport-update-service 80\n+ echo hostName\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Oct 23 03:56:04.689: INFO: stdout: ""
Oct 23 03:56:04.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-484 exec execpodttjjx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-update-service 80'
Oct 23 03:56:04.958: INFO: stderr: "+ nc -v -t -w 2 nodeport-update-service 80\n+ echo hostName\nConnection to nodeport-update-service 80 port [tcp/http] succeeded!\n"
Oct 23 03:56:04.958: INFO: stdout: ""
Oct 23 03:56:04.958: FAIL: Unexpected error:
    <*errors.errorString | 0xc0049a41a0>: {
        s: "service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245 +0x431
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001901800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001901800)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001901800, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
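When this timeout fires, the usual first checks are whether the Service still has endpoints and whether a node port was actually allocated; a hedged debugging sketch (standalone commands, not part of the test) is:
  # Confirm the Service kept its endpoints and received a NodePort allocation.
  kubectl get service nodeport-update-service --namespace=services-484 -o wide
  kubectl get endpoints nodeport-update-service --namespace=services-484
  # Inspect the backend pods directly, bypassing the Service VIP.
  kubectl get pods --namespace=services-484 -o wide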
Oct 23 03:56:04.959: INFO: Cleaning up the updating NodePorts test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-484".
STEP: Found 17 events.
Oct 23 03:56:04.991: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodttjjx: { } Scheduled: Successfully assigned services-484/execpodttjjx to node2
Oct 23 03:56:04.991: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-nc97l: { } Scheduled: Successfully assigned services-484/nodeport-update-service-nc97l to node2
Oct 23 03:56:04.991: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-update-service-vbg6b: { } Scheduled: Successfully assigned services-484/nodeport-update-service-vbg6b to node2
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:48 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-nc97l
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:48 +0000 UTC - event for nodeport-update-service: {replication-controller } SuccessfulCreate: Created pod: nodeport-update-service-vbg6b
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:53 +0000 UTC - event for nodeport-update-service-nc97l: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:53 +0000 UTC - event for nodeport-update-service-nc97l: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 697.630176ms
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:53 +0000 UTC - event for nodeport-update-service-vbg6b: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:53 +0000 UTC - event for nodeport-update-service-vbg6b: {kubelet node2} Started: Started container nodeport-update-service
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:53 +0000 UTC - event for nodeport-update-service-vbg6b: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 326.226466ms
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:53 +0000 UTC - event for nodeport-update-service-vbg6b: {kubelet node2} Created: Created container nodeport-update-service
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:54 +0000 UTC - event for nodeport-update-service-nc97l: {kubelet node2} Started: Started container nodeport-update-service
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:54 +0000 UTC - event for nodeport-update-service-nc97l: {kubelet node2} Created: Created container nodeport-update-service
Oct 23 03:56:04.991: INFO: At 2021-10-23 03:53:59 +0000 UTC - event for execpodttjjx: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
Oct 23 03:56:04.992: INFO: At 2021-10-23 03:54:00 +0000 UTC - event for execpodttjjx: {kubelet node2} Started: Started container agnhost-container
Oct 23 03:56:04.992: INFO: At 2021-10-23 03:54:00 +0000 UTC - event for execpodttjjx: {kubelet node2} Created: Created container agnhost-container
Oct 23 03:56:04.992: INFO: At 2021-10-23 03:54:00 +0000 UTC - event for execpodttjjx: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 295.084984ms
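The event dump above corresponds to what the following command would show for the same namespace; the sort field is the standard object creation timestamp, nothing specific to this test:
  # List the namespace events in chronological order.
  kubectl get events --namespace=services-484 --sort-by=.metadata.creationTimestamp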
Oct 23 03:56:04.994: INFO: POD                            NODE   PHASE    GRACE  CONDITIONS
Oct 23 03:56:04.994: INFO: execpodttjjx                   node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:57 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:54:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:54:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:57 +0000 UTC  }]
Oct 23 03:56:04.994: INFO: nodeport-update-service-nc97l  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:48 +0000 UTC  }]
Oct 23 03:56:04.994: INFO: nodeport-update-service-vbg6b  node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:53:48 +0000 UTC  }]
Oct 23 03:56:04.994: INFO: 
Oct 23 03:56:04.999: INFO: Logging node info for node master1
Oct 23 03:56:05.001: INFO: Node Info: &Node{ObjectMeta:{master1    1b0e9b6c-fa73-4303-880f-3c662903b3ba 146956 0 2021-10-22 21:03:37 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:01 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:01 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:01 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:56:01 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
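The Capacity and Allocatable maps in the dump above are serialized resource.Quantity values: cpu {{79550 -3} {} 79550m DecimalSI} is 79550 milli-cores (79.55 CPUs), and the allocatable memory of 195629492Ki is the 196518324Ki capacity less what the kubelet reserves for the system. A minimal sketch in Go (using the k8s.io/apimachinery resource package that produces these strings; the literal quantities are copied from the master1 dump above) of how such values parse:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Allocatable CPU from the master1 dump: 79550m == 79.55 cores.
	cpu := resource.MustParse("79550m")
	fmt.Println(cpu.MilliValue()) // 79550

	// Capacity vs. allocatable memory; the difference is the system/kubelet reservation.
	capMem := resource.MustParse("196518324Ki")
	allocMem := resource.MustParse("195629492Ki")
	reserved := capMem.DeepCopy()
	reserved.Sub(allocMem)
	fmt.Println(reserved.String()) // ~868Mi reserved on this node
}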
Oct 23 03:56:05.002: INFO: 
Logging kubelet events for node master1
Oct 23 03:56:05.004: INFO: 
Logging pods the kubelet thinks are on node master1
Oct 23 03:56:05.025: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:56:05.025: INFO: 	Container node-exporter ready: true, restart count 0
Oct 23 03:56:05.025: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 23 03:56:05.025: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Container kube-controller-manager ready: true, restart count 1
Oct 23 03:56:05.025: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 23 03:56:05.025: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Init container install-cni ready: true, restart count 1
Oct 23 03:56:05.025: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 23 03:56:05.025: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:56:05.025: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Container coredns ready: true, restart count 2
Oct 23 03:56:05.025: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Container docker-registry ready: true, restart count 0
Oct 23 03:56:05.025: INFO: 	Container nginx ready: true, restart count 0
Oct 23 03:56:05.025: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.025: INFO: 	Container kube-scheduler ready: true, restart count 0
W1023 03:56:05.040611      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:56:05.114: INFO: 
Latency metrics for node master1
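Each per-node block in this dump (node info, kubelet events, the pods the kubelet reports, latency metrics) is gathered through the same API server the tests talk to. A minimal sketch, assuming client-go and an accessible kubeconfig (the dumpNode helper and the kubeconfig path are illustrative, not part of the e2e framework), of how the node conditions and the per-node pod list could be reproduced:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// dumpNode prints roughly what the e2e framework logs per node:
// the node's conditions and the pods scheduled onto it.
func dumpNode(clientset *kubernetes.Clientset, name string) error {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("Node %s:\n", node.Name)
	for _, c := range node.Status.Conditions {
		fmt.Printf("  condition %s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
	// Pods across all namespaces whose spec.nodeName matches this node.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + name})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("  pod %s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
	return nil
}

func main() {
	// Kubeconfig path is an assumption; adjust to the environment in use.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Node names taken from the dump in this log.
	for _, n := range []string{"master1", "master2", "master3", "node1", "node2"} {
		if err := dumpNode(clientset, n); err != nil {
			fmt.Println("error:", err)
		}
	}
}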
Oct 23 03:56:05.114: INFO: 
Logging node info for node master2
Oct 23 03:56:05.116: INFO: Node Info: &Node{ObjectMeta:{master2    48070097-b11c-473d-9240-f4ee02bd7e2f 146962 0 2021-10-22 21:04:08 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:04 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:56:04 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:56:05.117: INFO: 
Logging kubelet events for node master2
Oct 23 03:56:05.121: INFO: 
Logging pods the kubelet thinks are on node master2
Oct 23 03:56:05.127: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.127: INFO: 	Container autoscaler ready: true, restart count 1
Oct 23 03:56:05.127: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.127: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:56:05.127: INFO: 	Container node-exporter ready: true, restart count 0
Oct 23 03:56:05.127: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.127: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 23 03:56:05.127: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.127: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 23 03:56:05.127: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.127: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 23 03:56:05.127: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:56:05.127: INFO: 	Init container install-cni ready: true, restart count 2
Oct 23 03:56:05.127: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 23 03:56:05.127: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.127: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:56:05.127: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.127: INFO: 	Container kube-controller-manager ready: true, restart count 2
W1023 03:56:05.140588      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:56:05.205: INFO: 
Latency metrics for node master2
Oct 23 03:56:05.205: INFO: 
Logging node info for node master3
Oct 23 03:56:05.208: INFO: Node Info: &Node{ObjectMeta:{master3    fe22a467-e2de-4b64-9399-d274e6d13231 146948 0 2021-10-22 21:04:18 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {}  BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:59 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:59 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:59 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:55:59 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:56:05.208: INFO: 
Logging kubelet events for node master3
Oct 23 03:56:05.210: INFO: 
Logging pods the kubelet thinks are on node master3
Oct 23 03:56:05.220: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.220: INFO: 	Container coredns ready: true, restart count 2
Oct 23 03:56:05.220: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.221: INFO: 	Container kube-scheduler ready: true, restart count 2
Oct 23 03:56:05.221: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.221: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:56:05.221: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.221: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 23 03:56:05.221: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:56:05.221: INFO: 	Init container install-cni ready: true, restart count 1
Oct 23 03:56:05.221: INFO: 	Container kube-flannel ready: true, restart count 1
Oct 23 03:56:05.221: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.221: INFO: 	Container nfd-controller ready: true, restart count 0
Oct 23 03:56:05.221: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.221: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:56:05.221: INFO: 	Container node-exporter ready: true, restart count 0
Oct 23 03:56:05.221: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.221: INFO: 	Container kube-apiserver ready: true, restart count 0
Oct 23 03:56:05.221: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.221: INFO: 	Container kube-controller-manager ready: true, restart count 2
W1023 03:56:05.235970      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:56:05.304: INFO: 
Latency metrics for node master3
Oct 23 03:56:05.304: INFO: 
Logging node info for node node1
Oct 23 03:56:05.307: INFO: Node Info: &Node{ObjectMeta:{node1    1c590bf6-8845-4681-8fa1-7acc55183d29 146959 0 2021-10-22 21:05:23 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 02:09:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:02 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:02 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:56:02 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:56:02 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:56:05.309: INFO: 
Logging kubelet events for node node1
Oct 23 03:56:05.311: INFO: 
Logging pods the kubelet thinks are on node node1
Oct 23 03:56:05.327: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 23 03:56:05.327: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Init container install-cni ready: true, restart count 2
Oct 23 03:56:05.327: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 23 03:56:05.327: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 03:56:05.327: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:56:05.327: INFO: 	Container node-exporter ready: true, restart count 0
Oct 23 03:56:05.327: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 23 03:56:05.327: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 23 03:56:05.327: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container config-reloader ready: true, restart count 0
Oct 23 03:56:05.327: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 03:56:05.327: INFO: 	Container grafana ready: true, restart count 0
Oct 23 03:56:05.327: INFO: 	Container prometheus ready: true, restart count 1
Oct 23 03:56:05.327: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container collectd ready: true, restart count 0
Oct 23 03:56:05.327: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 23 03:56:05.327: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 23 03:56:05.327: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:56:05.327: INFO: 	Container prometheus-operator ready: true, restart count 0
Oct 23 03:56:05.327: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 23 03:56:05.327: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:56:05.327: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 23 03:56:05.327: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container discover ready: false, restart count 0
Oct 23 03:56:05.327: INFO: 	Container init ready: false, restart count 0
Oct 23 03:56:05.327: INFO: 	Container install ready: false, restart count 0
Oct 23 03:56:05.327: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.327: INFO: 	Container nodereport ready: true, restart count 0
Oct 23 03:56:05.327: INFO: 	Container reconcile ready: true, restart count 0
W1023 03:56:05.341985      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:56:05.502: INFO: 
Latency metrics for node node1
Oct 23 03:56:05.502: INFO: 
Logging node info for node node2
Oct 23 03:56:05.505: INFO: Node Info: &Node{ObjectMeta:{node2    bdba54c1-d4eb-4c09-a343-50f320ccb048 146945 0 2021-10-22 21:05:23 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 02:09:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:57 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:57 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:55:57 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:55:57 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:56:05.505: INFO: 
Logging kubelet events for node node2
Oct 23 03:56:05.508: INFO: 
Logging pods the kubelet thinks are on node node2
Oct 23 03:56:05.525: INFO: nodeport-update-service-vbg6b started at 2021-10-23 03:53:48 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container nodeport-update-service ready: true, restart count 0
Oct 23 03:56:05.525: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container kube-proxy ready: true, restart count 2
Oct 23 03:56:05.525: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container discover ready: false, restart count 0
Oct 23 03:56:05.525: INFO: 	Container init ready: false, restart count 0
Oct 23 03:56:05.525: INFO: 	Container install ready: false, restart count 0
Oct 23 03:56:05.525: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container cmk-webhook ready: true, restart count 0
Oct 23 03:56:05.525: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container kube-multus ready: true, restart count 1
Oct 23 03:56:05.525: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 23 03:56:05.525: INFO: execpodttjjx started at 2021-10-23 03:53:57 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container agnhost-container ready: true, restart count 0
Oct 23 03:56:05.525: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 23 03:56:05.525: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:56:05.525: INFO: 	Container node-exporter ready: true, restart count 0
Oct 23 03:56:05.525: INFO: nodeport-update-service-nc97l started at 2021-10-23 03:53:48 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container nodeport-update-service ready: true, restart count 0
Oct 23 03:56:05.525: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Init container install-cni ready: true, restart count 1
Oct 23 03:56:05.525: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 23 03:56:05.525: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container tas-extender ready: true, restart count 0
Oct 23 03:56:05.525: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container collectd ready: true, restart count 0
Oct 23 03:56:05.525: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 23 03:56:05.525: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 23 03:56:05.525: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 23 03:56:05.525: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:56:05.525: INFO: 	Container nodereport ready: true, restart count 1
Oct 23 03:56:05.525: INFO: 	Container reconcile ready: true, restart count 0
W1023 03:56:05.536765      24 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:56:05.736: INFO: 
Latency metrics for node node2
Oct 23 03:56:05.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-484" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750


• Failure [137.721 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211

  Oct 23 03:56:04.958: Unexpected error:
      <*errors.errorString | 0xc0049a41a0>: {
          s: "service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint nodeport-update-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245
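
  Editor's note: the failed assertion above is a reachability poll against the service's NodePort over TCP that did not succeed within its 2m0s budget. A minimal sketch of that kind of probe, assuming a plain TCP dial loop, is shown below; it is an illustration only, not the e2e framework's actual helper. The node IP is taken from the node2 status dump above, while the NodePort value 30080 is a placeholder.

  package main

  import (
  	"fmt"
  	"net"
  	"time"
  )

  // dialUntilReachable retries a plain TCP dial until the endpoint accepts a
  // connection or the overall timeout elapses, mirroring the 2m0s budget the
  // failure above reports.
  func dialUntilReachable(host string, port int, timeout time.Duration) error {
  	addr := net.JoinHostPort(host, fmt.Sprint(port))
  	deadline := time.Now().Add(timeout)
  	for time.Now().Before(deadline) {
  		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
  		if err == nil {
  			conn.Close()
  			return nil
  		}
  		time.Sleep(time.Second)
  	}
  	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
  }

  func main() {
  	// 10.10.190.208 is node2's InternalIP from the node dump above;
  	// 30080 is a placeholder NodePort, not a value from this run.
  	if err := dialUntilReachable("10.10.190.208", 30080, 2*time.Minute); err != nil {
  		fmt.Println(err)
  	}
  }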
------------------------------
{"msg":"FAILED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":3,"skipped":524,"failed":1,"failures":["[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Oct 23 03:56:05.751: INFO: Running AfterSuite actions on all nodes


Oct 23 03:54:03.245: INFO: Running AfterSuite actions on all nodes
Oct 23 03:56:05.785: INFO: Running AfterSuite actions on node 1
Oct 23 03:56:05.785: INFO: Skipping dumping logs from cluster



Summarizing 2 Failures:

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a NodePort service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1245

Ran 27 of 5770 Specs in 257.259 seconds
FAIL! -- 25 Passed | 2 Failed | 0 Pending | 5743 Skipped


Ginkgo ran 1 suite in 4m18.928097487s
Test Suite Failed