I0526 22:35:58.851441 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0526 22:35:58.851591 17 e2e.go:129] Starting e2e run "c89d1440-c926-4573-9b68-4125821e238a" on Ginkgo node 1 {"msg":"Test Suite starting","total":1,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1622068557 - Will randomize all specs Will run 1 of 5668 specs May 26 22:35:58.862: INFO: >>> kubeConfig: /root/.kube/config May 26 22:35:58.866: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 26 22:35:58.900: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 26 22:35:58.950: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 26 22:35:58.950: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 26 22:35:58.950: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 26 22:35:58.966: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) May 26 22:35:58.966: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 26 22:35:58.966: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed) May 26 22:35:58.966: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 26 22:35:58.966: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed) May 26 22:35:58.966: INFO: e2e test version: v1.20.7 May 26 22:35:58.967: INFO: kube-apiserver version: v1.20.7 May 26 22:35:58.967: INFO: >>> kubeConfig: /root/.kube/config May 26 22:35:58.972: INFO: Cluster IP family: ipv4 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 26 22:35:59.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl May 26 22:35:59.038: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 26 22:35:59.044: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating all guestbook components May 26 22:35:59.047: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend May 26 22:35:59.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 create -f -' May 26 22:35:59.492: INFO: stderr: "" May 26 22:35:59.492: INFO: stdout: "service/agnhost-replica created\n" May 26 22:35:59.492: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend May 26 22:35:59.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 create -f -' May 26 22:35:59.775: INFO: stderr: "" May 26 22:35:59.775: INFO: stdout: "service/agnhost-primary created\n" May 26 22:35:59.775: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 26 22:35:59.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 create -f -' May 26 22:36:00.034: INFO: stderr: "" May 26 22:36:00.034: INFO: stdout: "service/frontend created\n" May 26 22:36:00.034: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 26 22:36:00.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 create -f -' May 26 22:36:00.372: INFO: stderr: "" May 26 22:36:00.372: INFO: stdout: "deployment.apps/frontend created\n" May 26 22:36:00.372: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 26 22:36:00.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 create -f -' May 26 22:36:00.667: INFO: stderr: "" May 26 22:36:00.667: INFO: stdout: "deployment.apps/agnhost-primary created\n" May 26 22:36:00.668: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost 
role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 26 22:36:00.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 create -f -' May 26 22:36:00.954: INFO: stderr: "" May 26 22:36:00.954: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app May 26 22:36:00.954: INFO: Waiting for all frontend pods to be Running. May 26 22:36:06.004: INFO: Waiting for frontend to serve content. May 26 22:36:11.014: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:36:21.028: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:36:31.036: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:36:41.045: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:36:51.055: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:37:01.063: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:37:11.070: INFO: Failed to get response from guestbook. 
err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:37:21.078: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:37:31.085: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:37:41.092: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:37:51.100: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:38:01.108: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:38:11.117: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:38:21.125: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:38:31.135: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:38:41.143: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:38:51.150: INFO: Failed to get response from guestbook. 
err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:39:01.158: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:39:11.166: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:39:21.173: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:39:31.181: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:39:41.188: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:39:51.196: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:40:01.204: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:40:11.214: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:40:21.223: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:40:31.231: INFO: Failed to get response from guestbook. 
err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:40:41.238: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:40:51.247: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:41:01.255: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:41:11.263: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:41:21.272: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:41:31.280: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:41:41.287: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:41:51.294: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:42:01.303: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:42:11.320: INFO: Failed to get response from guestbook. 
err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:42:21.328: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:42:31.336: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:42:41.344: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:42:51.352: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:43:01.360: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:43:11.368: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:43:21.375: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:43:31.383: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:43:41.391: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:43:51.398: INFO: Failed to get response from guestbook. 
err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:44:01.406: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:44:11.428: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:44:21.435: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:44:31.443: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:44:41.451: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:44:51.459: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:45:01.467: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:45:11.475: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:45:21.483: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:45:31.491: INFO: Failed to get response from guestbook. 
err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:45:41.498: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:45:51.506: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:46:01.515: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: May 26 22:46:06.516: FAIL: Frontend service did not start serving content in 600 seconds. Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 +0x159 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002b3b500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002b3b500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002b3b500, 0x4fbaa38) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 STEP: using delete to clean up resources May 26 22:46:06.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 delete --grace-period=0 --force -f -' May 26 22:46:06.846: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 26 22:46:06.846: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources May 26 22:46:06.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 delete --grace-period=0 --force -f -' May 26 22:46:07.042: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 22:46:07.043: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 26 22:46:07.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 delete --grace-period=0 --force -f -' May 26 22:46:07.171: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 22:46:07.171: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 26 22:46:07.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 delete --grace-period=0 --force -f -' May 26 22:46:07.286: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 26 22:46:07.286: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 26 22:46:07.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 delete --grace-period=0 --force -f -' May 26 22:46:07.406: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 22:46:07.406: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 26 22:46:07.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:44097 --kubeconfig=/root/.kube/config --namespace=kubectl-3669 delete --grace-period=0 --force -f -' May 26 22:46:07.526: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 22:46:07.526: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "kubectl-3669". STEP: Found 41 events. 
May 26 22:46:07.532: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for agnhost-primary: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-primary-56857545d9 to 1 May 26 22:46:07.532: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for agnhost-primary-56857545d9: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-primary-56857545d9-n6dtf May 26 22:46:07.532: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for agnhost-primary-56857545d9-n6dtf: {default-scheduler } Scheduled: Successfully assigned kubectl-3669/agnhost-primary-56857545d9-n6dtf to leguer-worker May 26 22:46:07.532: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for agnhost-replica: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-replica-55fd9c5577 to 2 May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for agnhost-replica-55fd9c5577: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-replica-55fd9c5577-qfnn7 May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for agnhost-replica-55fd9c5577: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-replica-55fd9c5577-wrlmh May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for agnhost-replica-55fd9c5577-qfnn7: {default-scheduler } Scheduled: Successfully assigned kubectl-3669/agnhost-replica-55fd9c5577-qfnn7 to leguer-worker May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for agnhost-replica-55fd9c5577-wrlmh: {default-scheduler } Scheduled: Successfully assigned kubectl-3669/agnhost-replica-55fd9c5577-wrlmh to leguer-worker May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-7659f66489 to 3 May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend-7659f66489: {replicaset-controller } SuccessfulCreate: Created pod: frontend-7659f66489-8jmrq May 26 22:46:07.533: INFO: At 2021-05-26 
22:36:00 +0000 UTC - event for frontend-7659f66489: {replicaset-controller } SuccessfulCreate: Created pod: frontend-7659f66489-4h25m May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend-7659f66489: {replicaset-controller } SuccessfulCreate: Created pod: frontend-7659f66489-v25pv May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend-7659f66489-4h25m: {default-scheduler } Scheduled: Successfully assigned kubectl-3669/frontend-7659f66489-4h25m to leguer-worker May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend-7659f66489-4h25m: {multus } AddedInterface: Add eth0 [10.244.1.215/24] May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend-7659f66489-8jmrq: {multus } AddedInterface: Add eth0 [10.244.1.214/24] May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend-7659f66489-8jmrq: {default-scheduler } Scheduled: Successfully assigned kubectl-3669/frontend-7659f66489-8jmrq to leguer-worker May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend-7659f66489-v25pv: {multus } AddedInterface: Add eth0 [10.244.1.216/24] May 26 22:46:07.533: INFO: At 2021-05-26 22:36:00 +0000 UTC - event for frontend-7659f66489-v25pv: {default-scheduler } Scheduled: Successfully assigned kubectl-3669/frontend-7659f66489-v25pv to leguer-worker May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-primary-56857545d9-n6dtf: {kubelet leguer-worker} Started: Started container primary May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-primary-56857545d9-n6dtf: {kubelet leguer-worker} Created: Created container primary May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-primary-56857545d9-n6dtf: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 
+0000 UTC - event for agnhost-primary-56857545d9-n6dtf: {multus } AddedInterface: Add eth0 [10.244.1.217/24] May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-replica-55fd9c5577-qfnn7: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-replica-55fd9c5577-qfnn7: {multus } AddedInterface: Add eth0 [10.244.1.218/24] May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-replica-55fd9c5577-qfnn7: {kubelet leguer-worker} Started: Started container replica May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-replica-55fd9c5577-qfnn7: {kubelet leguer-worker} Created: Created container replica May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-replica-55fd9c5577-wrlmh: {kubelet leguer-worker} Started: Started container replica May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-replica-55fd9c5577-wrlmh: {kubelet leguer-worker} Created: Created container replica May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-replica-55fd9c5577-wrlmh: {multus } AddedInterface: Add eth0 [10.244.1.219/24] May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for agnhost-replica-55fd9c5577-wrlmh: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for frontend-7659f66489-4h25m: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for frontend-7659f66489-4h25m: {kubelet leguer-worker} Started: Started container guestbook-frontend May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for 
frontend-7659f66489-4h25m: {kubelet leguer-worker} Created: Created container guestbook-frontend May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for frontend-7659f66489-8jmrq: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for frontend-7659f66489-8jmrq: {kubelet leguer-worker} Created: Created container guestbook-frontend May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for frontend-7659f66489-8jmrq: {kubelet leguer-worker} Started: Started container guestbook-frontend May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for frontend-7659f66489-v25pv: {kubelet leguer-worker} Created: Created container guestbook-frontend May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for frontend-7659f66489-v25pv: {kubelet leguer-worker} Started: Started container guestbook-frontend May 26 22:46:07.533: INFO: At 2021-05-26 22:36:01 +0000 UTC - event for frontend-7659f66489-v25pv: {kubelet leguer-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.21" already present on machine May 26 22:46:07.533: INFO: At 2021-05-26 22:36:39 +0000 UTC - event for agnhost-replica-55fd9c5577-qfnn7: {kubelet leguer-worker} BackOff: Back-off restarting failed container May 26 22:46:07.533: INFO: At 2021-05-26 22:36:39 +0000 UTC - event for agnhost-replica-55fd9c5577-wrlmh: {kubelet leguer-worker} BackOff: Back-off restarting failed container May 26 22:46:07.537: INFO: POD NODE PHASE GRACE CONDITIONS May 26 22:46:07.537: INFO: agnhost-primary-56857545d9-n6dtf leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 
22:36:00 +0000 UTC }] May 26 22:46:07.537: INFO: agnhost-replica-55fd9c5577-qfnn7 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:43:34 +0000 UTC ContainersNotReady containers with unready status: [replica]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:43:34 +0000 UTC ContainersNotReady containers with unready status: [replica]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC }] May 26 22:46:07.537: INFO: agnhost-replica-55fd9c5577-wrlmh leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:43:56 +0000 UTC ContainersNotReady containers with unready status: [replica]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:43:56 +0000 UTC ContainersNotReady containers with unready status: [replica]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC }] May 26 22:46:07.537: INFO: frontend-7659f66489-4h25m leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC }] May 26 22:46:07.537: INFO: frontend-7659f66489-8jmrq leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC }] May 26 22:46:07.537: INFO: frontend-7659f66489-v25pv leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 
22:36:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-26 22:36:00 +0000 UTC }] May 26 22:46:07.537: INFO: May 26 22:46:07.541: INFO: Logging node info for node leguer-control-plane May 26 22:46:07.544: INFO: Node Info: &Node{ObjectMeta:{leguer-control-plane 6d457de0-9a0f-4ff6-bd75-0bbc1430a694 1667622 0 2021-05-22 08:23:02 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux ingress-ready:true kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-22 08:23:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:ingress-ready":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:
message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-22 08:23:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-05-22 08:23:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-26 22:43:27 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-26 22:43:27 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-26 22:43:27 +0000 UTC,LastTransitionTime:2021-05-22 08:22:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-26 22:43:27 +0000 UTC,LastTransitionTime:2021-05-22 08:23:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.6,},NodeAddress{Type:Hostname,Address:leguer-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cd6232015d5d4123a4f981fce21e3374,SystemUUID:eba32c45-894e-4080-80ed-6ad2fd75cb06,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9 docker.io/kubernetesui/dashboard:v2.2.0],SizeBytes:67775224,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/envoyproxy/envoy@sha256:55d35e368436519136dbd978fa0682c49d8ab99e4d768413510f226762b30b07 docker.io/envoyproxy/envoy:v1.18.3],SizeBytes:51364868,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[docker.io/kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 docker.io/kubernetesui/metrics-scraper:v1.0.6],SizeBytes:15079854,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e 
docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 26 22:46:07.545: INFO: Logging kubelet events for node leguer-control-plane May 26 22:46:07.549: INFO: Logging pods the kubelet thinks is on node leguer-control-plane May 26 22:46:07.574: INFO: kube-controller-manager-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container kube-controller-manager ready: true, restart count 0 May 26 22:46:07.574: INFO: speaker-gjr9t started at 2021-05-22 08:23:45 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container speaker ready: true, restart count 0 May 26 22:46:07.574: INFO: kubernetes-dashboard-9f9799597-x8tx5 started at 2021-05-22 08:23:47 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 26 22:46:07.574: INFO: dashboard-metrics-scraper-79c5968bdc-krkfj started at 2021-05-22 08:23:47 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 26 22:46:07.574: INFO: kube-multus-ds-bxrtj started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container kube-multus ready: true, restart count 2 May 26 22:46:07.574: INFO: envoy-nwdcq 
started at 2021-05-22 08:23:46 +0000 UTC (1+2 container statuses recorded) May 26 22:46:07.574: INFO: Init container envoy-initconfig ready: true, restart count 0 May 26 22:46:07.574: INFO: Container envoy ready: true, restart count 0 May 26 22:46:07.574: INFO: Container shutdown-manager ready: true, restart count 0 May 26 22:46:07.574: INFO: kube-scheduler-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container kube-scheduler ready: true, restart count 0 May 26 22:46:07.574: INFO: kube-proxy-vqm28 started at 2021-05-22 08:23:20 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container kube-proxy ready: true, restart count 0 May 26 22:46:07.574: INFO: create-loop-devs-dxl2f started at 2021-05-22 08:23:43 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container loopdev ready: true, restart count 0 May 26 22:46:07.574: INFO: tune-sysctls-s5nrx started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container setsysctls ready: true, restart count 0 May 26 22:46:07.574: INFO: etcd-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container etcd ready: true, restart count 0 May 26 22:46:07.574: INFO: kube-apiserver-leguer-control-plane started at 2021-05-22 08:23:17 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container kube-apiserver ready: true, restart count 0 May 26 22:46:07.574: INFO: kindnet-8gg6p started at 2021-05-22 08:23:20 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.574: INFO: Container kindnet-cni ready: true, restart count 23 May 26 22:46:07.574: INFO: local-path-provisioner-547f784dff-pbsvl started at 2021-05-22 08:23:41 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.575: INFO: Container local-path-provisioner ready: true, restart count 0 W0526 22:46:07.581269 17 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 26 22:46:07.762: INFO: Latency metrics for node leguer-control-plane May 26 22:46:07.762: INFO: Logging node info for node leguer-worker May 26 22:46:07.766: INFO: Node Info: &Node{ObjectMeta:{leguer-worker a0394caa-d22f-452e-99cd-7356a6b84552 1667597 0 2021-05-22 08:23:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1043":"csi-mock-csi-mock-volumes-1043","csi-mock-csi-mock-volumes-1206":"csi-mock-csi-mock-volumes-1206","csi-mock-csi-mock-volumes-1231":"csi-mock-csi-mock-volumes-1231","csi-mock-csi-mock-volumes-1333":"csi-mock-csi-mock-volumes-1333","csi-mock-csi-mock-volumes-1684":"csi-mock-csi-mock-volumes-1684","csi-mock-csi-mock-volumes-1709":"csi-mock-csi-mock-volumes-1709","csi-mock-csi-mock-volumes-1826":"csi-mock-csi-mock-volumes-1826","csi-mock-csi-mock-volumes-1957":"csi-mock-csi-mock-volumes-1957","csi-mock-csi-mock-volumes-2039":"csi-mock-csi-mock-volumes-2039","csi-mock-csi-mock-volumes-2104":"csi-mock-csi-mock-volumes-2104","csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2262":"csi-mock-csi-mock-volumes-2262","csi-mock-csi-mock-volumes-2573":"csi-mock-csi-mock-volumes-2573","csi-mock-csi-mock-volumes-2582":"csi-mock-csi-mock-volumes-2582","csi-mock-csi-mock-volumes-264":"csi-mock-csi-mock-volumes-264","csi-mock-csi-mock-volumes-2708":"csi-mock-csi-mock-volumes-2708","csi-mock-csi-mock-volumes-2709":"csi-mock-csi-mock-volumes-2709","csi-mock-csi-mock-volumes-2834":"csi-mock-csi-mock-volumes-2834","csi-mock-csi-mock-volumes-3239":"csi-mock-csi-mock-volumes-3239","csi-mock-csi-mock-volumes-3358":"csi-mock-csi-mock-volumes-3358","csi-mock-csi-mock-volumes-3397":"csi-moc
k-csi-mock-volumes-3397","csi-mock-csi-mock-volumes-3429":"csi-mock-csi-mock-volumes-3429","csi-mock-csi-mock-volumes-3688":"csi-mock-csi-mock-volumes-3688","csi-mock-csi-mock-volumes-3826":"csi-mock-csi-mock-volumes-3826","csi-mock-csi-mock-volumes-3868":"csi-mock-csi-mock-volumes-3868","csi-mock-csi-mock-volumes-4016":"csi-mock-csi-mock-volumes-4016","csi-mock-csi-mock-volumes-4241":"csi-mock-csi-mock-volumes-4241","csi-mock-csi-mock-volumes-4356":"csi-mock-csi-mock-volumes-4356","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4572":"csi-mock-csi-mock-volumes-4572","csi-mock-csi-mock-volumes-4622":"csi-mock-csi-mock-volumes-4622","csi-mock-csi-mock-volumes-4721":"csi-mock-csi-mock-volumes-4721","csi-mock-csi-mock-volumes-476":"csi-mock-csi-mock-volumes-476","csi-mock-csi-mock-volumes-4796":"csi-mock-csi-mock-volumes-4796","csi-mock-csi-mock-volumes-4881":"csi-mock-csi-mock-volumes-4881","csi-mock-csi-mock-volumes-5044":"csi-mock-csi-mock-volumes-5044","csi-mock-csi-mock-volumes-5066":"csi-mock-csi-mock-volumes-5066","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5151":"csi-mock-csi-mock-volumes-5151","csi-mock-csi-mock-volumes-5192":"csi-mock-csi-mock-volumes-5192","csi-mock-csi-mock-volumes-5458":"csi-mock-csi-mock-volumes-5458","csi-mock-csi-mock-volumes-5479":"csi-mock-csi-mock-volumes-5479","csi-mock-csi-mock-volumes-5779":"csi-mock-csi-mock-volumes-5779","csi-mock-csi-mock-volumes-5811":"csi-mock-csi-mock-volumes-5811","csi-mock-csi-mock-volumes-5822":"csi-mock-csi-mock-volumes-5822","csi-mock-csi-mock-volumes-5852":"csi-mock-csi-mock-volumes-5852","csi-mock-csi-mock-volumes-6027":"csi-mock-csi-mock-volumes-6027","csi-mock-csi-mock-volumes-6090":"csi-mock-csi-mock-volumes-6090","csi-mock-csi-mock-volumes-6350":"csi-mock-csi-mock-volumes-6350","csi-mock-csi-mock-volumes-6748":"csi-mock-csi-mock-volumes-6748","csi-mock-csi-mock-volumes-6858":"csi-mock-csi-mock-volumes-685
8","csi-mock-csi-mock-volumes-7014":"csi-mock-csi-mock-volumes-7014","csi-mock-csi-mock-volumes-7049":"csi-mock-csi-mock-volumes-7049","csi-mock-csi-mock-volumes-7063":"csi-mock-csi-mock-volumes-7063","csi-mock-csi-mock-volumes-7292":"csi-mock-csi-mock-volumes-7292","csi-mock-csi-mock-volumes-7436":"csi-mock-csi-mock-volumes-7436","csi-mock-csi-mock-volumes-7562":"csi-mock-csi-mock-volumes-7562","csi-mock-csi-mock-volumes-7711":"csi-mock-csi-mock-volumes-7711","csi-mock-csi-mock-volumes-7779":"csi-mock-csi-mock-volumes-7779","csi-mock-csi-mock-volumes-785":"csi-mock-csi-mock-volumes-785","csi-mock-csi-mock-volumes-7884":"csi-mock-csi-mock-volumes-7884","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8126":"csi-mock-csi-mock-volumes-8126","csi-mock-csi-mock-volumes-840":"csi-mock-csi-mock-volumes-840","csi-mock-csi-mock-volumes-8665":"csi-mock-csi-mock-volumes-8665","csi-mock-csi-mock-volumes-8765":"csi-mock-csi-mock-volumes-8765","csi-mock-csi-mock-volumes-8973":"csi-mock-csi-mock-volumes-8973","csi-mock-csi-mock-volumes-8985":"csi-mock-csi-mock-volumes-8985","csi-mock-csi-mock-volumes-9044":"csi-mock-csi-mock-volumes-9044","csi-mock-csi-mock-volumes-9265":"csi-mock-csi-mock-volumes-9265","csi-mock-csi-mock-volumes-9313":"csi-mock-csi-mock-volumes-9313","csi-mock-csi-mock-volumes-9717":"csi-mock-csi-mock-volumes-9717","csi-mock-csi-mock-volumes-9736":"csi-mock-csi-mock-volumes-9736","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9838":"csi-mock-csi-mock-volumes-9838","csi-mock-csi-mock-volumes-9918":"csi-mock-csi-mock-volumes-9918"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-22 08:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-05-26 
08:19:25 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-05-26 08:22:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-05-26 08:25:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:ke
rnelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-26 22:43:17 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-26 22:43:17 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-26 22:43:17 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-26 22:43:17 +0000 UTC,LastTransitionTime:2021-05-22 08:23:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.7,},NodeAddress{Type:Hostname,Address:leguer-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3b3190afa60a4b3f8acfa4d884b5f41e,SystemUUID:e4621450-f7e7-447f-a390-1b05f9cdaec2,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:706d69e007d61c69495dc384167c7cb242ced8b893ac8bb30bdee4367c894980 docker.io/litmuschaos/go-runner:1.13.2],SizeBytes:153211568,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:806f80ccc41d7d5b33035d09bfc41bb7814f9989e738fcdefc29780934d4a663 docker.io/litmuschaos/chaos-runner:1.13.2],SizeBytes:56004602,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:0f30e5c1a1286a4bf6739dd8bdf1d00f0dd915474b3c62e892592277b0395986 
docker.io/bitnami/kubectl:latest],SizeBytes:49444404,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 
k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:17747507,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:8888823,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f 
k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 26 22:46:07.767: INFO: Logging kubelet events for node leguer-worker May 26 22:46:07.770: INFO: Logging pods the kubelet thinks is on node leguer-worker May 26 22:46:07.797: INFO: coredns-74ff55c5b-r2mx4 started at 2021-05-26 08:09:43 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container coredns ready: true, restart count 0 May 26 
22:46:07.797: INFO: tune-sysctls-v9b2d started at 2021-05-26 07:54:47 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container setsysctls ready: true, restart count 0 May 26 22:46:07.797: INFO: kindnet-svp2q started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container kindnet-cni ready: true, restart count 22 May 26 22:46:07.797: INFO: agnhost-replica-55fd9c5577-wrlmh started at 2021-05-26 22:36:00 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container replica ready: false, restart count 6 May 26 22:46:07.797: INFO: kube-multus-ds-rxcnb started at 2021-05-26 07:54:50 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container kube-multus ready: true, restart count 0 May 26 22:46:07.797: INFO: coredns-74ff55c5b-5wfqs started at 2021-05-26 08:09:43 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container coredns ready: true, restart count 0 May 26 22:46:07.797: INFO: frontend-7659f66489-v25pv started at 2021-05-26 22:36:00 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container guestbook-frontend ready: true, restart count 0 May 26 22:46:07.797: INFO: chaos-daemon-tlsqn started at 2021-05-26 09:15:28 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container chaos-daemon ready: true, restart count 0 May 26 22:46:07.797: INFO: kube-proxy-7g274 started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container kube-proxy ready: true, restart count 0 May 26 22:46:07.797: INFO: frontend-7659f66489-8jmrq started at 2021-05-26 22:36:00 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container guestbook-frontend ready: true, restart count 0 May 26 22:46:07.797: INFO: create-loop-devs-46wb9 started at 2021-05-26 07:55:17 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container loopdev ready: 
true, restart count 0 May 26 22:46:07.797: INFO: agnhost-replica-55fd9c5577-qfnn7 started at 2021-05-26 22:36:00 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container replica ready: false, restart count 6 May 26 22:46:07.797: INFO: speaker-27mw4 started at 2021-05-26 07:54:47 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container speaker ready: true, restart count 0 May 26 22:46:07.797: INFO: agnhost-primary-56857545d9-n6dtf started at 2021-05-26 22:36:00 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container primary ready: true, restart count 0 May 26 22:46:07.797: INFO: frontend-7659f66489-4h25m started at 2021-05-26 22:36:00 +0000 UTC (0+1 container statuses recorded) May 26 22:46:07.797: INFO: Container guestbook-frontend ready: true, restart count 0 W0526 22:46:07.805960 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 26 22:46:08.008: INFO: Latency metrics for node leguer-worker May 26 22:46:08.008: INFO: Logging node info for node leguer-worker2 May 26 22:46:08.015: INFO: Node Info: &Node{ObjectMeta:{leguer-worker2 8f8eaae4-b1b9-4593-a956-0b952e0c41c9 1667375 0 2021-05-22 08:23:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:leguer-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-101":"csi-mock-csi-mock-volumes-101","csi-mock-csi-mock-volumes-1085":"csi-mock-csi-mock-volumes-1085","csi-mock-csi-mock-volumes-1097":"csi-mock-csi-mock-volumes-1097","csi-mock-csi-mock-volumes-1188":"csi-mock-csi-mock-volumes-1188","csi-mock-csi-mock-volumes-1245":"csi-mock-csi-mock-volumes-1245","csi-mock-csi-mock-volumes-1317":"csi-mock-csi-mock-volumes-1317","csi-mock-csi-mock-volumes-1665":"csi-mock-csi-mock-volumes-1665","csi-mock-csi-mock-volumes-2611":"csi-mock-csi-mock-volumes-2611","csi-mock-csi-mock-volumes-2722":"csi-mock-csi-mock-volumes-2722","csi-mock-csi-mock-volumes-282":"csi-mock-csi-mock-volumes-282","csi-mock-csi-mock-volumes-2860":"csi-mock-csi-mock-volumes-2860","csi-mock-csi-mock-volumes-3181":"csi-mock-csi-mock-volumes-3181","csi-mock-csi-mock-volumes-3275":"csi-mock-csi-mock-volumes-3275","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3559":"csi-mock-csi-mock-volumes-3559","csi-mock-csi-mock-volumes-3596":"csi-mock-csi-mock-volumes-3596","csi-mock-csi-mock-volumes-3731":"csi-mock-csi-mock-volumes-3731","csi-mock-csi-mock-volumes-3760":"csi-mock-csi-mock-volumes-3760","csi-mock-csi-mock-volumes-3791":"csi-mock-csi-mock-volumes-3791","csi-mock-csi-mock-volumes-3993":"csi-mock-csi-mock-volumes-3993","csi-mock-csi-mock-volumes-4187":"csi-mock-csi-mock-volumes-4187","csi-mock-csi-mock-volumes-419":"csi-mock-csi-mock-volumes-419","csi-mock-csi-mock-volumes-4274":"csi-mock-csi-mock-volumes-4274","csi-mock-csi-mock-volumes-4278":"csi-mock-csi-mock-volumes-4278","csi-mock-csi-mock-volumes-4567":"csi-mock-csi-mock-volumes-4567","csi-mock-csi-mock-volumes-4902":"csi-mock-csi-mock-volumes-4902","csi-mock-csi-mock-volumes-5085":"csi-mock-csi-mock-volumes-5085","csi-mock-csi-mock-volumes-5359":"csi-mock-csi-mock-volumes-5359","csi-mock-csi-mock-volumes-5482":"csi-mock-csi-mock-volumes-5482","csi-mock-csi-mock-volumes-5902":"csi-mock-csi-mock-vol
umes-5902","csi-mock-csi-mock-volumes-6014":"csi-mock-csi-mock-volumes-6014","csi-mock-csi-mock-volumes-6026":"csi-mock-csi-mock-volumes-6026","csi-mock-csi-mock-volumes-6152":"csi-mock-csi-mock-volumes-6152","csi-mock-csi-mock-volumes-6258":"csi-mock-csi-mock-volumes-6258","csi-mock-csi-mock-volumes-6424":"csi-mock-csi-mock-volumes-6424","csi-mock-csi-mock-volumes-6551":"csi-mock-csi-mock-volumes-6551","csi-mock-csi-mock-volumes-661":"csi-mock-csi-mock-volumes-661","csi-mock-csi-mock-volumes-6689":"csi-mock-csi-mock-volumes-6689","csi-mock-csi-mock-volumes-6776":"csi-mock-csi-mock-volumes-6776","csi-mock-csi-mock-volumes-7182":"csi-mock-csi-mock-volumes-7182","csi-mock-csi-mock-volumes-7195":"csi-mock-csi-mock-volumes-7195","csi-mock-csi-mock-volumes-7255":"csi-mock-csi-mock-volumes-7255","csi-mock-csi-mock-volumes-7316":"csi-mock-csi-mock-volumes-7316","csi-mock-csi-mock-volumes-7364":"csi-mock-csi-mock-volumes-7364","csi-mock-csi-mock-volumes-7435":"csi-mock-csi-mock-volumes-7435","csi-mock-csi-mock-volumes-7533":"csi-mock-csi-mock-volumes-7533","csi-mock-csi-mock-volumes-7664":"csi-mock-csi-mock-volumes-7664","csi-mock-csi-mock-volumes-7768":"csi-mock-csi-mock-volumes-7768","csi-mock-csi-mock-volumes-800":"csi-mock-csi-mock-volumes-800","csi-mock-csi-mock-volumes-8090":"csi-mock-csi-mock-volumes-8090","csi-mock-csi-mock-volumes-8163":"csi-mock-csi-mock-volumes-8163","csi-mock-csi-mock-volumes-8351":"csi-mock-csi-mock-volumes-8351","csi-mock-csi-mock-volumes-8510":"csi-mock-csi-mock-volumes-8510","csi-mock-csi-mock-volumes-868":"csi-mock-csi-mock-volumes-868","csi-mock-csi-mock-volumes-8794":"csi-mock-csi-mock-volumes-8794","csi-mock-csi-mock-volumes-8875":"csi-mock-csi-mock-volumes-8875","csi-mock-csi-mock-volumes-8912":"csi-mock-csi-mock-volumes-8912","csi-mock-csi-mock-volumes-8951":"csi-mock-csi-mock-volumes-8951","csi-mock-csi-mock-volumes-9011":"csi-mock-csi-mock-volumes-9011","csi-mock-csi-mock-volumes-9167":"csi-mock-csi-mock-volumes-9167","csi-mock-csi-m
ock-volumes-9267":"csi-mock-csi-mock-volumes-9267","csi-mock-csi-mock-volumes-927":"csi-mock-csi-mock-volumes-927","csi-mock-csi-mock-volumes-9337":"csi-mock-csi-mock-volumes-9337","csi-mock-csi-mock-volumes-9346":"csi-mock-csi-mock-volumes-9346","csi-mock-csi-mock-volumes-9361":"csi-mock-csi-mock-volumes-9361","csi-mock-csi-mock-volumes-9453":"csi-mock-csi-mock-volumes-9453","csi-mock-csi-mock-volumes-9494":"csi-mock-csi-mock-volumes-9494","csi-mock-csi-mock-volumes-9507":"csi-mock-csi-mock-volumes-9507","csi-mock-csi-mock-volumes-9629":"csi-mock-csi-mock-volumes-9629","csi-mock-csi-mock-volumes-9836":"csi-mock-csi-mock-volumes-9836","csi-mock-csi-mock-volumes-9868":"csi-mock-csi-mock-volumes-9868"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-22 08:23:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-05-26 08:18:11 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-05-26 08:24:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-05-26 08:24:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/leguer/leguer-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: 
{{470632488960 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-26 22:41:46 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-26 22:41:46 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-26 22:41:46 +0000 UTC,LastTransitionTime:2021-05-22 08:23:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-26 22:41:46 +0000 UTC,LastTransitionTime:2021-05-22 08:23:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.5,},NodeAddress{Type:Hostname,Address:leguer-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:86c8c7b1af6542c49386440702c637be,SystemUUID:fe86f09a-28b3-4895-94ce-6312a2d07a57,BootID:8e840902-9ac1-4acc-b00a-3731226c7bea,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 
20.10,ContainerRuntimeVersion:containerd://1.5.1,KubeletVersion:v1.20.7,KubeProxyVersion:v1.20.7,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:706d69e007d61c69495dc384167c7cb242ced8b893ac8bb30bdee4367c894980 docker.io/litmuschaos/go-runner:1.13.2],SizeBytes:153211568,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.7],SizeBytes:122987857,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.7],SizeBytes:120339943,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.7],SizeBytes:117523811,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[docker.io/library/docker@sha256:87ed8e3a7b251eef42c2e4251f95ae3c5f8c4c0a64900f19cc532d0a42aa7107 docker.io/library/docker:dind],SizeBytes:81659525,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:332c4eff6fb327d140edbcc4cf5be7d3afd2ce5b6883348350f2336320c79ff7 docker.io/litmuschaos/chaos-operator:1.13.2],SizeBytes:57450276,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:806f80ccc41d7d5b33035d09bfc41bb7814f9989e738fcdefc29780934d4a663 docker.io/litmuschaos/chaos-runner:1.13.2],SizeBytes:56004602,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:53960776,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:0f30e5c1a1286a4bf6739dd8bdf1d00f0dd915474b3c62e892592277b0395986 
docker.io/bitnami/kubectl:latest],SizeBytes:49444404,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.7],SizeBytes:48502094,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a k8s.gcr.io/e2e-test-images/agnhost:2.21],SizeBytes:46251468,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c quay.io/metallb/speaker:main],SizeBytes:39322460,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[quay.io/metallb/controller@sha256:68c52b5301b42cad0cbf497f3d83c2e18b82548a9c36690b99b2023c55cb715a quay.io/metallb/controller:main],SizeBytes:35989620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:21086532,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 
k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:17747507,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:13982350,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:13367922,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c 
gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:8888823,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 26 22:46:08.016: INFO: Logging kubelet events for node leguer-worker2 May 26 22:46:08.022: INFO: Logging pods the kubelet thinks is on node leguer-worker2 May 26 22:46:08.048: INFO: kindnet-kx9mk started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container kindnet-cni ready: true, restart count 23 May 26 22:46:08.048: INFO: tune-sysctls-vjdll started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container setsysctls ready: true, restart count 0 May 26 22:46:08.048: INFO: chaos-operator-ce-5754fd4b69-zcrd4 started at 2021-05-26 09:12:47 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container chaos-operator ready: true, restart count 0 May 26 22:46:08.048: INFO: 
chaos-controller-manager-69c479c674-ld4jc started at 2021-05-26 09:15:28 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container chaos-mesh ready: true, restart count 0 May 26 22:46:08.048: INFO: speaker-55zcr started at 2021-05-22 08:23:57 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container speaker ready: true, restart count 0 May 26 22:46:08.048: INFO: kube-proxy-mp68m started at 2021-05-22 08:23:37 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container kube-proxy ready: true, restart count 0 May 26 22:46:08.048: INFO: kube-multus-ds-n48bs started at 2021-05-22 08:23:44 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container kube-multus ready: true, restart count 1 May 26 22:46:08.048: INFO: contour-6648989f79-8gz4z started at 2021-05-22 10:05:00 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container contour ready: true, restart count 0 May 26 22:46:08.048: INFO: controller-675995489c-h2wms started at 2021-05-22 08:23:59 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container controller ready: true, restart count 0 May 26 22:46:08.048: INFO: dockerd started at 2021-05-26 09:12:20 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container dockerd ready: true, restart count 0 May 26 22:46:08.048: INFO: chaos-daemon-2tzpz started at 2021-05-26 09:15:28 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container chaos-daemon ready: true, restart count 0 May 26 22:46:08.048: INFO: create-loop-devs-nbf25 started at 2021-05-22 08:23:43 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container loopdev ready: true, restart count 0 May 26 22:46:08.048: INFO: contour-6648989f79-2vldk started at 2021-05-22 08:24:02 +0000 UTC (0+1 container statuses recorded) May 26 22:46:08.048: INFO: Container contour ready: true, restart count 0 W0526 22:46:08.056184 17 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 26 22:46:08.269: INFO: Latency metrics for node leguer-worker2 May 26 22:46:08.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3669" for this suite. • Failure [609.267 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 26 22:46:06.516: Frontend service did not start serving content in 600 seconds. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":1,"completed":0,"skipped":5067,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} May 26 22:46:08.286: INFO: Running AfterSuite actions on all nodes May 26
22:46:08.286: INFO: Running AfterSuite actions on node 1 May 26 22:46:08.286: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_smoke/junit_01.xml {"msg":"Test Suite completed","total":1,"completed":0,"skipped":5667,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} Summarizing 1 Failure: [Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 Ran 1 of 5668 Specs in 609.427 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 5667 Skipped --- FAIL: TestE2E (609.47s) FAIL Ginkgo ran 1 suite in 10m10.89971947s Test Suite Failed