/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 17 14:46:53.355: Cannot add new entry in 180 seconds.
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 14:43:12.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Dec 17 14:43:12.595: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Dec 17 14:43:12.599: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Dec 17 14:43:12.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2669'
Dec 17 14:43:14.828: INFO: stderr: ""
Dec 17 14:43:14.828: INFO: stdout: "service/agnhost-slave created\n"
Dec 17 14:43:14.829: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Dec 17 14:43:14.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2669'
Dec 17 14:43:15.322: INFO: stderr: ""
Dec 17 14:43:15.322: INFO: stdout: "service/agnhost-master created\n"
Dec 17 14:43:15.322: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Dec 17 14:43:15.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2669'
Dec 17 14:43:15.682: INFO: stderr: ""
Dec 17 14:43:15.682: INFO: stdout: "service/frontend created\n"
Dec 17 14:43:15.683: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Dec 17 14:43:15.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2669'
Dec 17 14:43:16.002: INFO: stderr: ""
Dec 17 14:43:16.002: INFO: stdout: "deployment.apps/frontend created\n"
Dec 17 14:43:16.002: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Dec 17 14:43:16.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2669'
Dec 17 14:43:16.385: INFO: stderr: ""
Dec 17 14:43:16.385: INFO: stdout: "deployment.apps/agnhost-master created\n"
Dec 17 14:43:16.386: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Dec 17 14:43:16.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2669'
Dec 17 14:43:17.257: INFO: stderr: ""
Dec 17 14:43:17.257: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Dec 17 14:43:17.257: INFO: Waiting for all frontend pods to be Running.
Dec 17 14:43:52.309: INFO: Waiting for frontend to serve content.
Dec 17 14:43:52.320: INFO: Trying to add a new entry to the guestbook.
Dec 17 14:43:52.335: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Dec 17 14:43:57.371: INFO: Failed to get response from guestbook. err: <nil>, response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
[the same "connection refused" failure against 10.32.0.1:6379 was logged every ~5 seconds, 34 more times, through Dec 17 14:46:48.354]
Dec 17 14:46:53.355: FAIL: Cannot add new entry in 180 seconds.
Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x5424e60, 0xc000d95a20, 0xc0020bd6e0, 0xc)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:417 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002798700)
_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc002798700)
_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc002798700, 0x4c30de8)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
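The repeated "connection refused" errors show the frontend propagating writes to 10.32.0.1:6379, which is not one of the slave pods' addresses, suggesting the frontend resolved the wrong backend IP rather than the slave pods being down (the events below confirm the slave containers started). A minimal diagnostic sketch against a live cluster, assuming access to the same namespace (kubectl-2669, taken from the log); the commands and names are illustrative, not part of the test run:

```shell
# Hypothetical debugging steps for the guestbook propagation failure.
NS=kubectl-2669

# 1. Confirm the slave Service has ready endpoints on port 6379;
#    an empty ENDPOINTS column would point at a selector/readiness mismatch.
kubectl --namespace "$NS" get endpoints agnhost-slave -o wide

# 2. Verify the pods backing the Service are Running and Ready,
#    and compare their pod IPs with the 10.32.0.1 address in the errors.
kubectl --namespace "$NS" get pods -l app=agnhost,role=slave -o wide

# 3. Exercise the agnhost guestbook HTTP API directly from inside the
#    cluster; the frontend propagates writes via GET /set?key=...&value=...
kubectl --namespace "$NS" run curl-test --rm -it --restart=Never \
  --image=appropriate/curl -- \
  curl -s "http://agnhost-slave:6379/set?key=messages&value=TestEntry"
```

If step 3 succeeds while the test still fails, the fault is in how the frontend resolves the slave address (e.g. stale DNS or kube-proxy rules on the node), not in the slave deployment itself.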
STEP: using delete to clean up resources
Dec 17 14:46:53.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2669'
Dec 17 14:46:53.650: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:46:53.650: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:46:53.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2669'
Dec 17 14:46:53.824: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:46:53.824: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:46:53.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2669'
Dec 17 14:46:54.078: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:46:54.078: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:46:54.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2669'
Dec 17 14:46:54.347: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:46:54.347: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:46:54.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2669'
Dec 17 14:46:54.436: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:46:54.436: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 14:46:54.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2669'
Dec 17 14:46:54.821: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 14:46:54.821: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubectl-2669".
STEP: Found 37 events.
Dec 17 14:46:54.853: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-j4hvb: {default-scheduler } Scheduled: Successfully assigned kubectl-2669/agnhost-master-74c46fb7d4-j4hvb to jerma-node
Dec 17 14:46:54.853: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-mcqbr: {default-scheduler } Scheduled: Successfully assigned kubectl-2669/agnhost-slave-774cfc759f-mcqbr to jerma-server-4b75xjbddvit
Dec 17 14:46:54.853: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-rsmh7: {default-scheduler } Scheduled: Successfully assigned kubectl-2669/agnhost-slave-774cfc759f-rsmh7 to jerma-node
Dec 17 14:46:54.853: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-49tg2: {default-scheduler } Scheduled: Successfully assigned kubectl-2669/frontend-6c5f89d5d4-49tg2 to jerma-node
Dec 17 14:46:54.853: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-ffhrg: {default-scheduler } Scheduled: Successfully assigned kubectl-2669/frontend-6c5f89d5d4-ffhrg to jerma-node
Dec 17 14:46:54.853: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-vvpxw: {default-scheduler } Scheduled: Successfully assigned kubectl-2669/frontend-6c5f89d5d4-vvpxw to jerma-server-4b75xjbddvit
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:16 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:16 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-j4hvb
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:16 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:16 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-ffhrg
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:16 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-vvpxw
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:16 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-49tg2
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:19 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:19 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-mcqbr
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:19 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-rsmh7
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:33 +0000 UTC - event for frontend-6c5f89d5d4-vvpxw: {kubelet jerma-server-4b75xjbddvit} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:34 +0000 UTC - event for agnhost-master-74c46fb7d4-j4hvb: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:34 +0000 UTC - event for agnhost-slave-774cfc759f-mcqbr: {kubelet jerma-server-4b75xjbddvit} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:34 +0000 UTC - event for agnhost-slave-774cfc759f-rsmh7: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:39 +0000 UTC - event for frontend-6c5f89d5d4-49tg2: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:41 +0000 UTC - event for agnhost-slave-774cfc759f-mcqbr: {kubelet jerma-server-4b75xjbddvit} Created: Created container slave
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:41 +0000 UTC - event for frontend-6c5f89d5d4-vvpxw: {kubelet jerma-server-4b75xjbddvit} Created: Created container guestbook-frontend
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:42 +0000 UTC - event for agnhost-slave-774cfc759f-mcqbr: {kubelet jerma-server-4b75xjbddvit} Started: Started container slave
Dec 17 14:46:54.853: INFO: At 2019-12-17 14:43:43 +0000 UTC - event for frontend-6c5f89d5d4-vvpxw: {kubelet jerma-server-4b75xjbddvit} Started: Started container guestbook-frontend
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:44 +0000 UTC - event for frontend-6c5f89d5d4-ffhrg: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:49 +0000 UTC - event for agnhost-master-74c46fb7d4-j4hvb: {kubelet jerma-node} Created: Created container master
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:49 +0000 UTC - event for agnhost-slave-774cfc759f-rsmh7: {kubelet jerma-node} Created: Created container slave
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:49 +0000 UTC - event for frontend-6c5f89d5d4-49tg2: {kubelet jerma-node} Created: Created container guestbook-frontend
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:50 +0000 UTC - event for frontend-6c5f89d5d4-ffhrg: {kubelet jerma-node} Created: Created container guestbook-frontend
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:51 +0000 UTC - event for agnhost-master-74c46fb7d4-j4hvb: {kubelet jerma-node} Started: Started container master
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:51 +0000 UTC - event for agnhost-slave-774cfc759f-rsmh7: {kubelet jerma-node} Started: Started container slave
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:51 +0000 UTC - event for frontend-6c5f89d5d4-49tg2: {kubelet jerma-node} Started: Started container guestbook-frontend
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:43:51 +0000 UTC - event for frontend-6c5f89d5d4-ffhrg: {kubelet jerma-node} Started: Started container guestbook-frontend
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:46:54 +0000 UTC - event for agnhost-master-74c46fb7d4-j4hvb: {kubelet jerma-node} Killing: Stopping container master
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:46:54 +0000 UTC - event for frontend-6c5f89d5d4-49tg2: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:46:54 +0000 UTC - event for frontend-6c5f89d5d4-ffhrg: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Dec 17 14:46:54.854: INFO: At 2019-12-17 14:46:54 +0000 UTC - event for frontend-6c5f89d5d4-vvpxw: {kubelet jerma-server-4b75xjbddvit} Killing: Stopping container guestbook-frontend
Dec 17 14:46:54.866: INFO: POD NODE PHASE GRACE CONDITIONS
Dec 17 14:46:54.866: INFO: agnhost-master-74c46fb7d4-j4hvb jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:16 +0000 UTC }]
Dec 17 14:46:54.866: INFO: agnhost-slave-774cfc759f-mcqbr jerma-server-4b75xjbddvit Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:19 +0000 UTC }]
Dec 17 14:46:54.866: INFO: agnhost-slave-774cfc759f-rsmh7 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:19 +0000 UTC }]
Dec 17 14:46:54.866: INFO: frontend-6c5f89d5d4-49tg2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:16 +0000 UTC }]
Dec 17 14:46:54.866: INFO: frontend-6c5f89d5d4-ffhrg jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:16 +0000 UTC }]
Dec 17 14:46:54.866: INFO: frontend-6c5f89d5d4-vvpxw jerma-server-4b75xjbddvit Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 14:43:16 +0000 UTC }]
Dec 17 14:46:54.866: INFO:
Dec 17 14:46:54.993: INFO:
Logging node info for node jerma-node
Dec 17 14:46:55.013: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 77a1de86-fa0a-4097-aa1b-ddd3667d796b 9103400 0 2019-10-12 13:47:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4136013824 0} {<nil>} 4039076Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4031156224 0} {<nil>} 3936676Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-12-16 22:16:34 +0000 UTC,LastTransitionTime:2019-12-16 22:16:34 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-12-17 14:46:39 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-12-17 14:46:39 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-12-17 14:46:39 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-12-17 14:46:39 +0000 UTC,LastTransitionTime:2019-10-12 13:48:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.170,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4eaf1504b38c4046a625a134490a5292,SystemUUID:4EAF1504-B38C-4046-A625-A134490A5292,BootID:be260572-5100-4207-9fbc-2294735ff8aa,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.16.1,KubeProxyVersion:v1.16.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2],SizeBytes:148150868,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:adb4d547241d08bbb25a928b7356b9f122c4a2e81abfe47aebdd659097e79dbc k8s.gcr.io/kube-proxy:v1.16.1],SizeBytes:86061020,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2],SizeBytes:49569458,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd 
gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0 busybox@sha256:b91fb3b63e212bb0d3dd0461025b969705b1df565a8bd454bd5095aa7bea9221],SizeBytes:1219790,},ContainerImage{Names:[busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084 
busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 17 14:46:55.014: INFO:
Logging kubelet events for node jerma-node
Dec 17 14:46:55.020: INFO:
Logging pods the kubelet thinks are on node jerma-node
Dec 17 14:46:55.037: INFO: weave-net-x498p started at 2019-12-16 22:16:24 +0000 UTC (0+2 container statuses recorded)
Dec 17 14:46:55.037: INFO: Container weave ready: true, restart count 0
Dec 17 14:46:55.037: INFO: Container weave-npc ready: true, restart count 0
Dec 17 14:46:55.037: INFO: frontend-6c5f89d5d4-ffhrg started at 2019-12-17 14:43:16 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.037: INFO: Container guestbook-frontend ready: true, restart count 0
Dec 17 14:46:55.037: INFO: frontend-6c5f89d5d4-49tg2 started at 2019-12-17 14:43:16 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.037: INFO: Container guestbook-frontend ready: true, restart count 0
Dec 17 14:46:55.037: INFO: agnhost-slave-774cfc759f-rsmh7 started at 2019-12-17 14:43:22 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.037: INFO: Container slave ready: true, restart count 0
Dec 17 14:46:55.037: INFO: agnhost-master-74c46fb7d4-j4hvb started at 2019-12-17 14:43:16 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.037: INFO: Container master ready: true, restart count 0
Dec 17 14:46:55.037: INFO: kube-proxy-jcjl4 started at 2019-10-12 13:47:49 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.037: INFO: Container kube-proxy ready: true, restart count 0
Dec 17 14:46:55.075: INFO:
Latency metrics for node jerma-node
Dec 17 14:46:55.076: INFO:
Logging node info for node jerma-server-4b75xjbddvit
Dec 17 14:46:55.083: INFO: Node Info: &Node{ObjectMeta:{jerma-server-4b75xjbddvit /api/v1/nodes/jerma-server-4b75xjbddvit 65247a99-359d-4f89-a587-9b1e2846985b 9103409 0 2019-10-12 13:29:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-4b75xjbddvit kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4136026112 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4031168512 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-12-13 09:17:15 +0000 UTC,LastTransitionTime:2019-12-13 09:17:15 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-12-17 14:46:46 +0000 UTC,LastTransitionTime:2019-10-12 13:29:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-12-17 14:46:46 +0000 UTC,LastTransitionTime:2019-12-13 09:12:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-12-17 14:46:46 +0000 
UTC,LastTransitionTime:2019-10-12 13:29:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-12-17 14:46:46 +0000 UTC,LastTransitionTime:2019-10-12 13:29:53 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.3.35,},NodeAddress{Type:Hostname,Address:jerma-server-4b75xjbddvit,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c617e976dd6040539102788a191b2ea4,SystemUUID:C617E976-DD60-4053-9102-788A191B2EA4,BootID:b7792a6d-7352-4851-9822-f2fa8fe18763,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.16.1,KubeProxyVersion:v1.16.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15 k8s.gcr.io/etcd:3.3.15-0],SizeBytes:246640776,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:80feeaed6c6445ab0ea0c27153354c3cac19b8b028d9b14fc134f947e716e25e k8s.gcr.io/kube-apiserver:v1.16.1],SizeBytes:217083230,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:36259393d3c7cb84a6420db94dccfc75faa8adc9841142467691b7123ab4e8b8 k8s.gcr.io/kube-controller-manager:v1.16.1],SizeBytes:163318238,},ContainerImage{Names:[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2],SizeBytes:148150868,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:c51d0cff4c90fd1ed1e0c62509c4bee2035f8815c68ed355d3643f0db3d084a9 k8s.gcr.io/kube-scheduler:v1.16.1],SizeBytes:87269918,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:adb4d547241d08bbb25a928b7356b9f122c4a2e81abfe47aebdd659097e79dbc k8s.gcr.io/kube-proxy:v1.16.1],SizeBytes:86061020,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2],SizeBytes:49569458,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 17 14:46:55.084: INFO:
Logging kubelet events for node jerma-server-4b75xjbddvit
Dec 17 14:46:55.169: INFO:
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit
Dec 17 14:46:55.190: INFO: agnhost-slave-774cfc759f-mcqbr started at 2019-12-17 14:43:22 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container slave ready: true, restart count 0
Dec 17 14:46:55.190: INFO: coredns-5644d7b6d9-9sj58 started at 2019-12-14 15:12:12 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container coredns ready: true, restart count 0
Dec 17 14:46:55.190: INFO: kube-scheduler-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:42 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container kube-scheduler ready: true, restart count 11
Dec 17 14:46:55.190: INFO: kube-proxy-bdcvr started at 2019-12-13 09:08:20 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container kube-proxy ready: true, restart count 0
Dec 17 14:46:55.190: INFO: coredns-5644d7b6d9-xvlxj started at 2019-12-14 16:49:52 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container coredns ready: true, restart count 0
Dec 17 14:46:55.190: INFO: frontend-6c5f89d5d4-vvpxw started at 2019-12-17 14:43:16 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container guestbook-frontend ready: true, restart count 0
Dec 17 14:46:55.190: INFO: etcd-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:37 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container etcd ready: true, restart count 1
Dec 17 14:46:55.190: INFO: kube-controller-manager-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:40 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container kube-controller-manager ready: true, restart count 8
Dec 17 14:46:55.190: INFO: kube-apiserver-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:38 +0000 UTC (0+1 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container kube-apiserver ready: true, restart count 1
Dec 17 14:46:55.190: INFO: coredns-5644d7b6d9-n9kkw started at 2019-11-10 16:39:08 +0000 UTC (0+0 container statuses recorded)
Dec 17 14:46:55.190: INFO: coredns-5644d7b6d9-rqwzj started at 2019-11-10 18:03:38 +0000 UTC (0+0 container statuses recorded)
Dec 17 14:46:55.190: INFO: weave-net-gsjjk started at 2019-12-13 09:16:56 +0000 UTC (0+2 container statuses recorded)
Dec 17 14:46:55.190: INFO: Container weave ready: true, restart count 0
Dec 17 14:46:55.190: INFO: Container weave-npc ready: true, restart count 0
Dec 17 14:46:55.223: INFO:
Latency metrics for node jerma-server-4b75xjbddvit
Dec 17 14:46:55.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2669" for this suite.
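The per-container lines in the dump above follow a fixed `Container <name> ready: <bool>, restart count <n>` pattern, so when triaging a failure like the 180-second timeout at the top of this log it can help to flag containers with nonzero restart counts (here kube-scheduler at 11, kube-controller-manager at 8, etcd and kube-apiserver at 1). A minimal parsing sketch — not part of the e2e framework, just a triage helper for dumps in this format:

```python
import re

# Matches the per-container status lines the e2e framework prints when it
# dumps node state, e.g.:
#   Dec 17 14:46:55.190: INFO: Container kube-scheduler ready: true, restart count 11
CONTAINER_RE = re.compile(
    r"INFO: Container (?P<name>\S+) ready: (?P<ready>true|false), "
    r"restart count (?P<restarts>\d+)"
)

def summarize_restarts(log_text):
    """Return {container_name: restart_count} for containers that restarted.

    Keyed by container name, so if the same name appears on several nodes
    (e.g. the two 'slave' replicas above), the last occurrence wins.
    """
    restarts = {}
    for line in log_text.splitlines():
        m = CONTAINER_RE.search(line)
        if m and int(m.group("restarts")) > 0:
            restarts[m.group("name")] = int(m.group("restarts"))
    return restarts

if __name__ == "__main__":
    sample = """\
Dec 17 14:46:55.190: INFO: Container kube-scheduler ready: true, restart count 11
Dec 17 14:46:55.190: INFO: Container kube-proxy ready: true, restart count 0
Dec 17 14:46:55.190: INFO: Container kube-controller-manager ready: true, restart count 8
"""
    print(summarize_restarts(sample))
    # → {'kube-scheduler': 11, 'kube-controller-manager': 8}
```

Control-plane components restarting repeatedly (as kube-scheduler and kube-controller-manager did here) are a common reason guestbook entries fail to propagate within the test's timeout, so this is usually the first thing worth checking in a dump like this.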