I0104 12:06:53.737431       8 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0104 12:06:53.738119       8 e2e.go:109] Starting e2e run "8122af43-3d41-4ff7-86ac-03032253e536" on Ginkgo node 1
{"msg":"Test Suite starting","total":1,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578139612 - Will randomize all specs
Will run 1 of 4841 specs

Jan  4 12:06:53.864: INFO: >>> kubeConfig: /root/.kube/config
Jan  4 12:06:53.873: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan  4 12:06:53.899: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan  4 12:06:53.943: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan  4 12:06:53.943: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan  4 12:06:53.943: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan  4 12:06:53.956: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan  4 12:06:53.956: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan  4 12:06:53.956: INFO: e2e test version: v1.18.0-alpha.1.106+4f70231ce7736c
Jan  4 12:06:53.958: INFO: kube-apiserver version: v1.17.0
Jan  4 12:06:53.958: INFO: >>> kubeConfig: /root/.kube/config
Jan  4 12:06:53.965: INFO: Cluster IP family: ipv4
------------------------------
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan  4 12:06:53.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan  4 12:06:54.166: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Jan  4 12:06:54.168: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan  4 12:06:54.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6142'
Jan  4 12:06:56.466: INFO: stderr: ""
Jan  4 12:06:56.467: INFO: stdout: "service/agnhost-slave created\n"
Jan  4 12:06:56.468: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan  4 12:06:56.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6142'
Jan  4 12:06:57.050: INFO: stderr: ""
Jan  4 12:06:57.050: INFO: stdout: "service/agnhost-master created\n"
Jan  4 12:06:57.051: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  4 12:06:57.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6142'
Jan  4 12:06:57.419: INFO: stderr: ""
Jan  4 12:06:57.419: INFO: stdout: "service/frontend created\n"
Jan  4 12:06:57.420: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan  4 12:06:57.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6142'
Jan  4 12:06:57.729: INFO: stderr: ""
Jan  4 12:06:57.729: INFO: stdout: "deployment.apps/frontend created\n"
Jan  4 12:06:57.729: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  4 12:06:57.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6142'
Jan  4 12:06:58.155: INFO: stderr: ""
Jan  4 12:06:58.155: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan  4 12:06:58.155: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  4 12:06:58.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6142'
Jan  4 12:06:59.203: INFO: stderr: ""
Jan  4 12:06:59.203: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan  4 12:06:59.203: INFO: Waiting for all frontend pods to be Running.
Jan  4 12:07:24.256: INFO: Waiting for frontend to serve content.
Jan  4 12:07:24.279: INFO: Trying to add a new entry to the guestbook.
Jan  4 12:07:24.299: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:07:29.323: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:07:34.364: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:07:39.415: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:07:44.468: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:07:49.495: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:07:54.524: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:07:59.555: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:04.611: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:09.635: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:14.677: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:19.714: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:24.781: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:29.825: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:34.891: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:39.917: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:44.953: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:49.977: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:08:55.203: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:00.222: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:05.247: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:10.271: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:15.352: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:20.379: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:25.403: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:30.429: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:35.466: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:40.491: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:45.522: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:50.562: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:09:55.589: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:10:00.631: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:10:05.706: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:10:10.727: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:10:15.755: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:10:20.811: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
Jan  4 12:10:25.813: FAIL: Cannot added new entry in 180 seconds.
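Every retry above fails identically: the frontend propagates the write to the slave at 10.32.0.1 and the HTTP call to port 6379 is refused for the full 180 s, i.e. nothing was ever listening on that address/port from the frontend's point of view. A quick manual check (the commands below are illustrative assumptions, not part of the test run; the probe pod and busybox image are hypothetical, and the namespace only exists until the suite's cleanup) would be to replay the exact request from a throwaway pod, then compare the target IP against the slave pods' real IPs:

  # Replay the request the frontend keeps failing on, from inside the cluster:
  kubectl --kubeconfig=/root/.kube/config -n kubectl-6142 run probe --rm -i --restart=Never \
    --image=busybox -- wget -qO- 'http://10.32.0.1:6379/set?key=messages&value=TestEntry'

  # List the slave pods with their IPs to see whether 10.32.0.1 is one of them:
  kubectl --kubeconfig=/root/.kube/config -n kubectl-6142 get pods -l role=slave -o wide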
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x54725c0, 0xc0024a7080, 0xc0027a2010, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2338 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:419 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0029b9200)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0029b9200)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc0029b9200, 0x4c73e48)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
STEP: using delete to clean up resources
Jan  4 12:10:25.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6142'
Jan  4 12:10:26.193: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:10:26.193: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:10:26.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6142'
Jan  4 12:10:26.417: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:10:26.417: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:10:26.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6142'
Jan  4 12:10:26.575: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:10:26.575: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:10:26.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6142'
Jan  4 12:10:26.663: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:10:26.663: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:10:26.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6142'
Jan  4 12:10:26.745: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:10:26.745: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 12:10:26.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6142'
Jan  4 12:10:26.835: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 12:10:26.835: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "kubectl-6142".
STEP: Found 33 events.
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:57 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:57 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-xd4mp
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:57 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-6kc42
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:57 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-wzrmh
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:57 +0000 UTC - event for frontend-6c5f89d5d4-wzrmh: {default-scheduler } Scheduled: Successfully assigned kubectl-6142/frontend-6c5f89d5d4-wzrmh to jerma-node
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:58 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:58 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-4txkd
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:58 +0000 UTC - event for agnhost-master-74c46fb7d4-4txkd: {default-scheduler } Scheduled: Successfully assigned kubectl-6142/agnhost-master-74c46fb7d4-4txkd to jerma-node
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:58 +0000 UTC - event for frontend-6c5f89d5d4-6kc42: {default-scheduler } Scheduled: Successfully assigned kubectl-6142/frontend-6c5f89d5d4-6kc42 to jerma-server-mvvl6gufaqub
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:58 +0000 UTC - event for frontend-6c5f89d5d4-xd4mp: {default-scheduler } Scheduled: Successfully assigned kubectl-6142/frontend-6c5f89d5d4-xd4mp to jerma-node
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:59 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:59 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-cdrq8
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:59 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-wgjq9
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:59 +0000 UTC - event for agnhost-slave-774cfc759f-cdrq8: {default-scheduler } Scheduled: Successfully assigned kubectl-6142/agnhost-slave-774cfc759f-cdrq8 to jerma-node
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:06:59 +0000 UTC - event for agnhost-slave-774cfc759f-wgjq9: {default-scheduler } Scheduled: Successfully assigned kubectl-6142/agnhost-slave-774cfc759f-wgjq9 to jerma-server-mvvl6gufaqub
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:04 +0000 UTC - event for frontend-6c5f89d5d4-6kc42: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:07 +0000 UTC - event for agnhost-master-74c46fb7d4-4txkd: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:07 +0000 UTC - event for agnhost-slave-774cfc759f-wgjq9: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:10 +0000 UTC - event for frontend-6c5f89d5d4-wzrmh: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:11 +0000 UTC - event for frontend-6c5f89d5d4-6kc42: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:12 +0000 UTC - event for agnhost-slave-774cfc759f-cdrq8: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:12 +0000 UTC - event for agnhost-slave-774cfc759f-wgjq9: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:12 +0000 UTC - event for agnhost-slave-774cfc759f-wgjq9: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:12 +0000 UTC - event for frontend-6c5f89d5d4-6kc42: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:14 +0000 UTC - event for frontend-6c5f89d5d4-xd4mp: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:16 +0000 UTC - event for agnhost-master-74c46fb7d4-4txkd: {kubelet jerma-node} Created: Created container master
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:16 +0000 UTC - event for frontend-6c5f89d5d4-wzrmh: {kubelet jerma-node} Created: Created container guestbook-frontend
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:18 +0000 UTC - event for agnhost-slave-774cfc759f-cdrq8: {kubelet jerma-node} Created: Created container slave
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:18 +0000 UTC - event for frontend-6c5f89d5d4-xd4mp: {kubelet jerma-node} Created: Created container guestbook-frontend
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:19 +0000 UTC - event for agnhost-master-74c46fb7d4-4txkd: {kubelet jerma-node} Started: Started container master
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:19 +0000 UTC - event for agnhost-slave-774cfc759f-cdrq8: {kubelet jerma-node} Started: Started container slave
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:19 +0000 UTC - event for frontend-6c5f89d5d4-wzrmh: {kubelet jerma-node} Started: Started container guestbook-frontend
Jan  4 12:10:26.839: INFO: At 2020-01-04 12:07:19 +0000 UTC - event for frontend-6c5f89d5d4-xd4mp: {kubelet jerma-node} Started: Started container guestbook-frontend
Jan  4 12:10:26.842: INFO: POD                              NODE                       PHASE    GRACE  CONDITIONS
Jan  4 12:10:26.842: INFO: agnhost-master-74c46fb7d4-4txkd  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:58 +0000 UTC }]
Jan  4 12:10:26.842: INFO: agnhost-slave-774cfc759f-cdrq8   jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:59 +0000 UTC }]
Jan  4 12:10:26.842: INFO: agnhost-slave-774cfc759f-wgjq9   jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:59 +0000 UTC }]
Jan  4 12:10:26.842: INFO: frontend-6c5f89d5d4-6kc42        jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:58 +0000 UTC }]
Jan  4 12:10:26.842: INFO: frontend-6c5f89d5d4-wzrmh        jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:57 +0000 UTC }]
Jan  4 12:10:26.842: INFO: frontend-6c5f89d5d4-xd4mp        jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:07:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:06:57 +0000 UTC }]
Jan  4 12:10:26.842: INFO:
Jan  4 12:10:26.844: INFO: Logging node info for node jerma-node
Jan  4 12:10:26.846: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 3338 0 2020-01-04 11:59:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-04 12:07:24 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-04 12:07:24 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-04 12:07:24 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-04 12:07:24 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan  4 12:10:26.846: INFO: Logging kubelet events for node jerma-node
Jan  4 12:10:26.849: INFO: Logging pods the kubelet thinks is on node jerma-node
Jan  4 12:10:26.882: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Jan  4 12:10:26.882: INFO: 	Container weave ready: true, restart count 1
Jan  4 12:10:26.882: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 12:10:26.882: INFO: frontend-6c5f89d5d4-wzrmh started at 2020-01-04 12:06:58 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:26.882: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan  4 12:10:26.882: INFO: frontend-6c5f89d5d4-xd4mp started at 2020-01-04 12:06:58 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:26.882: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan  4 12:10:26.882: INFO: agnhost-master-74c46fb7d4-4txkd started at 2020-01-04 12:06:59 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:26.882: INFO: 	Container master ready: true, restart count 0
Jan  4 12:10:26.882: INFO: agnhost-slave-774cfc759f-cdrq8 started at 2020-01-04 12:07:00 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:26.882: INFO: 	Container slave ready: true, restart count 0
Jan  4 12:10:26.882: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:26.882: INFO: 	Container kube-proxy ready: true, restart count 0
W0104 12:10:26.885977       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 12:10:26.967: INFO: Latency metrics for node jerma-node
Jan  4 12:10:26.967: INFO: Logging node info for node jerma-server-mvvl6gufaqub
Jan  4 12:10:26.970: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 3347 0 2020-01-04 11:47:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-04 12:07:27 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-04 12:07:27 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-04 12:07:27 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-04 12:07:27 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan  4 12:10:26.971: INFO: Logging kubelet events for node jerma-server-mvvl6gufaqub
Jan  4 12:10:28.155: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub
Jan  4 12:10:28.547: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container coredns ready: true, restart count 0
Jan  4 12:10:28.547: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container coredns ready: true, restart count 0
Jan  4 12:10:28.547: INFO: frontend-6c5f89d5d4-6kc42 started at 2020-01-04 12:06:58 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan  4 12:10:28.547: INFO: agnhost-slave-774cfc759f-wgjq9 started at 2020-01-04 12:07:00 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container slave ready: true, restart count 0
Jan  4 12:10:28.547: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container kube-scheduler ready: true, restart count 1
Jan  4 12:10:28.547: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container kube-controller-manager ready: true, restart count 1
Jan  4 12:10:28.547: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container etcd ready: true, restart count 1
Jan  4 12:10:28.547: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 12:10:28.547: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container weave ready: true, restart count 0
Jan  4 12:10:28.547: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 12:10:28.547: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan  4 12:10:28.547: INFO: 	Container kube-apiserver ready: true, restart count 1
W0104 12:10:28.599088       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 12:10:29.400: INFO: Latency metrics for node jerma-server-mvvl6gufaqub
Jan  4 12:10:29.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6142" for this suite.

• Failure [215.951 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:387
    should create and stop a working application [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685

    Jan  4 12:10:25.813: Cannot added new entry in 180 seconds.
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2338
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":1,"completed":0,"skipped":730,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
Jan  4 12:10:30.023: INFO: Running AfterSuite actions on all nodes
Jan  4 12:10:30.023: INFO: Running AfterSuite actions on node 1
Jan  4 12:10:30.023: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_smoke/junit_01.xml
{"msg":"Test Suite completed","total":1,"completed":0,"skipped":4840,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}

Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2338

Ran 1 of 4841 Specs in 216.177 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 4840 Skipped
--- FAIL: TestE2E (216.33s)
FAIL
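When iterating on this single failure, it is usually faster to re-run just the failing spec instead of the whole conformance-filtered suite. A sketch, assuming a locally built e2e.test binary and the kubeconfig shown in the log (the binary path is an assumption, not taken from this output):

  # Re-run only the Guestbook spec; --ginkgo.focus takes a regular expression,
  # so the bracketed [Conformance] tag is left out of the pattern.
  ./e2e.test --kubeconfig=/root/.kube/config \
    --ginkgo.focus='Guestbook application should create and stop a working application'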