I0104 12:30:22.815207 8 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0104 12:30:22.816166 8 e2e.go:109] Starting e2e run "aea228b2-8aa0-493a-b474-40b3e911391d" on Ginkgo node 1
{"msg":"Test Suite starting","total":1,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578141021 - Will randomize all specs
Will run 1 of 4841 specs
Jan 4 12:30:22.836: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 12:30:22.838: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 4 12:30:22.868: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 4 12:30:22.946: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 4 12:30:22.946: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 4 12:30:22.946: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 4 12:30:22.954: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 4 12:30:22.954: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 4 12:30:22.954: INFO: e2e test version: v1.18.0-alpha.1.106+4f70231ce7736c
Jan 4 12:30:22.955: INFO: kube-apiserver version: v1.17.0
Jan 4 12:30:22.955: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 12:30:22.959: INFO: Cluster IP family: ipv4
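A transcript this long is easiest to triage by filtering for verdict lines first. A minimal sketch, assuming the log is piped in (the here-document below stands in for the real log file):

```shell
# Filter a Ginkgo e2e transcript for FAIL verdicts.
# The here-document is a stand-in for piping in the actual log.
grep 'FAIL:' <<'EOF'
Jan 4 12:30:22.836: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 12:33:55.443: FAIL: Cannot added new entry in 180 seconds.
EOF
```

In practice one would run `grep 'FAIL:' e2e.log` (file name hypothetical) and work backward from each hit.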
------------------------------
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 4 12:30:22.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan 4 12:30:23.053: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Jan 4 12:30:23.055: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Jan 4 12:30:23.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6215'
Jan 4 12:30:25.338: INFO: stderr: ""
Jan 4 12:30:25.338: INFO: stdout: "service/agnhost-slave created\n"
Jan 4 12:30:25.339: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Jan 4 12:30:25.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6215'
Jan 4 12:30:25.738: INFO: stderr: ""
Jan 4 12:30:25.739: INFO: stdout: "service/agnhost-master created\n"
Jan 4 12:30:25.739: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 4 12:30:25.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6215'
Jan 4 12:30:26.064: INFO: stderr: ""
Jan 4 12:30:26.064: INFO: stdout: "service/frontend created\n"
Jan 4 12:30:26.064: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jan 4 12:30:26.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6215'
Jan 4 12:30:26.401: INFO: stderr: ""
Jan 4 12:30:26.401: INFO: stdout: "deployment.apps/frontend created\n"
Jan 4 12:30:26.401: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 4 12:30:26.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6215'
Jan 4 12:30:26.819: INFO: stderr: ""
Jan 4 12:30:26.820: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 4 12:30:26.820: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 4 12:30:26.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6215'
Jan 4 12:30:28.205: INFO: stderr: ""
Jan 4 12:30:28.205: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 4 12:30:28.205: INFO: Waiting for all frontend pods to be Running.
Jan 4 12:30:53.258: INFO: Waiting for frontend to serve content.
Jan 4 12:30:53.283: INFO: Trying to add a new entry to the guestbook.
Jan 4 12:30:53.310: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:30:58.334: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:03.376: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:08.406: INFO: Failed to get response from guestbook.
err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:13.441: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:18.479: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:23.514: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:28.557: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:33.590: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:38.627: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:43.704: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:48.732: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:53.767: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:31:58.816: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:03.867: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:08.915: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:13.973: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:19.004: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:24.115: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:29.316: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:34.329: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:39.356: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:44.407: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:49.447: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:54.486: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:32:59.518: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:04.553: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:09.575: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:14.639: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:20.107: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:25.127: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:30.152: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:35.183: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:40.209: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:45.240: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:50.442: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused
Jan 4 12:33:55.443: FAIL: Cannot added new entry in 180 seconds.
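The validation above polls roughly every 5 seconds until a 180-second budget is exhausted, then fails. A minimal shell sketch of that retry pattern (`retry_until` is hypothetical; the `true` probe stands in for the real HTTP check against the guestbook frontend):

```shell
# Retry a probe command at a fixed interval until it succeeds or a
# deadline (in seconds) is exceeded, mirroring the harness's loop.
retry_until() {
  deadline=$1; interval=$2; shift 2
  elapsed=0
  until "$@"; do
    elapsed=$((elapsed + interval))
    if [ "$elapsed" -ge "$deadline" ]; then
      echo "FAIL: probe did not succeed within ${deadline}s" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# A probe that succeeds immediately returns on the first attempt.
retry_until 180 5 true && echo "entry added"
```

In the run above every probe hit `connection refused`, so the loop ran out its full 180-second budget before failing.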
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x54725c0, 0xc0026ac000, 0xc001f7fdc0, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2338 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:419 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002467f00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc002467f00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc002467f00, 0x4c73e48)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
STEP: using delete to clean up resources
Jan 4 12:33:55.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6215'
Jan 4 12:33:55.786: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:33:55.787: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:33:55.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6215'
Jan 4 12:33:56.041: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:33:56.041: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:33:56.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6215'
Jan 4 12:33:56.242: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:33:56.242: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:33:56.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6215'
Jan 4 12:33:56.390: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:33:56.390: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:33:56.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6215'
Jan 4 12:33:56.589: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:33:56.590: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 4 12:33:56.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6215'
Jan 4 12:33:56.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Jan 4 12:33:56.810: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "kubectl-6215".
STEP: Found 35 events.
Jan 4 12:33:56.952: INFO: At 2020-01-04 12:30:26 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:26 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:26 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-zpl5l
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:26 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-v9n6g
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:26 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-sg5zg
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:26 +0000 UTC - event for frontend-6c5f89d5d4-sg5zg: {default-scheduler } Scheduled: Successfully assigned kubectl-6215/frontend-6c5f89d5d4-sg5zg to jerma-node
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:26 +0000 UTC - event for frontend-6c5f89d5d4-v9n6g: {default-scheduler } Scheduled: Successfully assigned kubectl-6215/frontend-6c5f89d5d4-v9n6g to jerma-server-mvvl6gufaqub
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:26 +0000 UTC - event for frontend-6c5f89d5d4-zpl5l: {default-scheduler } Scheduled: Successfully assigned kubectl-6215/frontend-6c5f89d5d4-zpl5l to jerma-node
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:27 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-mjxrh
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:27 +0000 UTC - event for agnhost-master-74c46fb7d4-mjxrh: {default-scheduler } Scheduled: Successfully assigned kubectl-6215/agnhost-master-74c46fb7d4-mjxrh to jerma-node
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:28 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:28 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-bzdqb
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:28 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-5v9vf
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:28 +0000 UTC - event for agnhost-slave-774cfc759f-5v9vf: {default-scheduler } Scheduled: Successfully assigned kubectl-6215/agnhost-slave-774cfc759f-5v9vf to jerma-server-mvvl6gufaqub
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:28 +0000 UTC - event for agnhost-slave-774cfc759f-bzdqb: {default-scheduler } Scheduled: Successfully assigned kubectl-6215/agnhost-slave-774cfc759f-bzdqb to jerma-node
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:32 +0000 UTC - event for frontend-6c5f89d5d4-v9n6g: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:35 +0000 UTC - event for agnhost-master-74c46fb7d4-mjxrh: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:35 +0000 UTC - event for agnhost-slave-774cfc759f-5v9vf: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:35 +0000 UTC - event for frontend-6c5f89d5d4-zpl5l: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:36 +0000 UTC - event for frontend-6c5f89d5d4-v9n6g: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:37 +0000 UTC - event for frontend-6c5f89d5d4-sg5zg: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:39 +0000 UTC - event for agnhost-slave-774cfc759f-5v9vf: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:39 +0000 UTC - event for frontend-6c5f89d5d4-v9n6g: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:40 +0000 UTC - event for agnhost-slave-774cfc759f-5v9vf: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:41 +0000 UTC - event for agnhost-slave-774cfc759f-bzdqb: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:44 +0000 UTC - event for agnhost-master-74c46fb7d4-mjxrh: {kubelet jerma-node} Created: Created container master
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:44 +0000 UTC - event for frontend-6c5f89d5d4-zpl5l: {kubelet jerma-node} Created: Created container guestbook-frontend
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:47 +0000 UTC - event for agnhost-slave-774cfc759f-bzdqb: {kubelet jerma-node} Created: Created container slave
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:47 +0000 UTC - event for frontend-6c5f89d5d4-sg5zg: {kubelet jerma-node} Created: Created container guestbook-frontend
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:48 +0000 UTC - event for agnhost-master-74c46fb7d4-mjxrh: {kubelet jerma-node} Started: Started container master
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:48 +0000 UTC - event for agnhost-slave-774cfc759f-bzdqb: {kubelet jerma-node} Started: Started container slave
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:48 +0000 UTC - event for frontend-6c5f89d5d4-sg5zg: {kubelet jerma-node} Started: Started container guestbook-frontend
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:30:48 +0000 UTC - event for frontend-6c5f89d5d4-zpl5l: {kubelet jerma-node} Started: Started container guestbook-frontend
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:33:56 +0000 UTC - event for agnhost-master-74c46fb7d4-mjxrh: {kubelet jerma-node} Killing: Stopping container master
Jan 4 12:33:56.953: INFO: At 2020-01-04 12:33:56 +0000 UTC - event for frontend-6c5f89d5d4-v9n6g: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container guestbook-frontend
Jan 4 12:33:56.999: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 4 12:33:56.999: INFO: agnhost-master-74c46fb7d4-mjxrh jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:27 +0000 UTC }]
Jan 4 12:33:57.000: INFO: agnhost-slave-774cfc759f-5v9vf jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:28 +0000 UTC }]
Jan 4 12:33:57.000: INFO: agnhost-slave-774cfc759f-bzdqb jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:28 +0000 UTC }]
Jan 4 12:33:57.000: INFO: frontend-6c5f89d5d4-sg5zg jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:26 +0000 UTC }]
Jan 4 12:33:57.000: INFO: frontend-6c5f89d5d4-v9n6g jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:26 +0000 UTC }]
Jan 4 12:33:57.000: INFO: frontend-6c5f89d5d4-zpl5l jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 12:30:26 +0000 UTC }]
Jan 4 12:33:57.000: INFO:
Jan 4 12:33:57.838: INFO: Logging node info for node jerma-node
Jan 4 12:33:58.209: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 7672 0 2020-01-04 11:59:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-04 12:32:30 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-04 12:32:30 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-04 12:32:30 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-04 12:32:30 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 4 12:33:58.209: INFO: Logging kubelet events for node jerma-node Jan 4 12:33:58.217: INFO: Logging pods the kubelet thinks is on node jerma-node Jan 4 12:33:58.253: INFO: agnhost-slave-774cfc759f-bzdqb started at 2020-01-04 12:30:28 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.253: INFO: Container slave ready: true, restart count 0 Jan 4 12:33:58.253: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded) Jan 4 
12:33:58.253: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 12:33:58.253: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded) Jan 4 12:33:58.253: INFO: Container weave ready: true, restart count 1 Jan 4 12:33:58.253: INFO: Container weave-npc ready: true, restart count 0 Jan 4 12:33:58.253: INFO: frontend-6c5f89d5d4-sg5zg started at 2020-01-04 12:30:26 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.253: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 4 12:33:58.253: INFO: frontend-6c5f89d5d4-zpl5l started at 2020-01-04 12:30:26 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.253: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 4 12:33:58.253: INFO: agnhost-master-74c46fb7d4-mjxrh started at 2020-01-04 12:30:28 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.253: INFO: Container master ready: true, restart count 0 W0104 12:33:58.532918 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 4 12:33:58.780: INFO: Latency metrics for node jerma-node Jan 4 12:33:58.781: INFO: Logging node info for node jerma-server-mvvl6gufaqub Jan 4 12:33:58.787: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 7665 0 2020-01-04 11:47:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-04 12:32:29 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-04 12:32:29 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-04 12:32:29 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-04 12:32:29 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c 
weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 4 12:33:58.788: INFO: Logging kubelet events for node jerma-server-mvvl6gufaqub Jan 4 12:33:58.796: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub Jan 4 12:33:58.820: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container kube-scheduler ready: true, restart count 1 Jan 4 12:33:58.821: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container coredns ready: true, restart count 0 Jan 4 12:33:58.821: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container coredns ready: true, restart count 0 Jan 4 12:33:58.821: INFO: frontend-6c5f89d5d4-v9n6g started at 2020-01-04 12:30:26 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container guestbook-frontend ready: true, restart count 0 Jan 4 
12:33:58.821: INFO: agnhost-slave-774cfc759f-5v9vf started at 2020-01-04 12:30:28 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container slave ready: true, restart count 0 Jan 4 12:33:58.821: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container kube-apiserver ready: true, restart count 1 Jan 4 12:33:58.821: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container kube-controller-manager ready: true, restart count 1 Jan 4 12:33:58.821: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container etcd ready: true, restart count 1 Jan 4 12:33:58.821: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded) Jan 4 12:33:58.821: INFO: Container kube-proxy ready: true, restart count 0 Jan 4 12:33:58.821: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded) Jan 4 12:33:58.821: INFO: Container weave ready: true, restart count 0 Jan 4 12:33:58.821: INFO: Container weave-npc ready: true, restart count 0 W0104 12:33:58.848728 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 4 12:33:58.894: INFO: Latency metrics for node jerma-server-mvvl6gufaqub Jan 4 12:33:58.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6215" for this suite. 
• Failure [215.930 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:387 should create and stop a working application [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 4 12:33:55.443: Cannot added new entry in 180 seconds. /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2338 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":1,"completed":0,"skipped":2223,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
Jan 4 12:33:58.955: INFO: Running AfterSuite actions on all nodes Jan 4 12:33:58.955: INFO: Running AfterSuite actions on node 1 Jan 4 12:33:58.955: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_smoke/junit_01.xml {"msg":"Test Suite completed","total":1,"completed":0,"skipped":4840,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
Summarizing 1 Failure: [Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2338 Ran 1 of 4841 Specs in 216.125 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 4840 Skipped --- FAIL: TestE2E (216.19s) FAIL