/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 26 22:24:07.811: Pod ss-0 expected to be re-created at least once
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:742
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 26 22:18:59.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1447
[It] should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1447
STEP: Creating statefulset with conflicting port in namespace statefulset-1447
STEP: Waiting until pod test-pod will start running in namespace statefulset-1447
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1447
Jan 26 22:19:07.805: INFO: Observed stateful pod in namespace: statefulset-1447, name: ss-0, uid: 440bd979-88d7-4816-b0c2-a20b8620aa97, status phase: Pending. Waiting for statefulset controller to delete.
Jan 26 22:24:07.811: FAIL: Pod ss-0 expected to be re-created at least once
Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.12()
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:742 +0x11ba
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002001300)
_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc002001300)
_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc002001300, 0x4c30de8)
/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 26 22:24:07.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-1447'
Jan 26 22:24:08.068: INFO: stderr: ""
Jan 26 22:24:08.068: INFO:
Output of kubectl describe ss-0:
Name:         ss-0
Namespace:    statefulset-1447
Priority:     0
Node:         jerma-node/
Labels:       baz=blah
              controller-revision-hash=ss-5c959bc8d4
              foo=bar
              statefulset.kubernetes.io/pod-name=ss-0
Annotations:  <none>
Status:       Pending
IP:
IPs:          <none>
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Image:        docker.io/library/httpd:2.4.38-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-62cvz (ro)
Volumes:
  default-token-62cvz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-62cvz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ---   ----                 -------
  Warning  PodFitsHostPorts  5m6s  kubelet, jerma-node  Predicate PodFitsHostPorts failed
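The PodFitsHostPorts warning above is a plain host-port clash: test-pod already holds host port 21017 on jerma-node, so the recreated ss-0 can never be admitted there. A toy check over simplified pod specs illustrates the logic (the `podSpec` type and `fitsHostPorts` helper are illustrative stand-ins, not the kubelet's real predicate):

```go
package main

import "fmt"

// podSpec is a drastically simplified stand-in for a Pod: just the node it
// is bound to and the host ports its containers request.
type podSpec struct {
	name      string
	node      string
	hostPorts []int
}

// fitsHostPorts reports whether candidate can run on its node given the pods
// already running there -- a sketch of what PodFitsHostPorts verifies.
func fitsHostPorts(candidate podSpec, running []podSpec) bool {
	used := map[int]bool{}
	for _, p := range running {
		if p.node != candidate.node {
			continue // host ports only conflict on the same node
		}
		for _, hp := range p.hostPorts {
			used[hp] = true
		}
	}
	for _, hp := range candidate.hostPorts {
		if used[hp] {
			return false
		}
	}
	return true
}

func main() {
	testPod := podSpec{name: "test-pod", node: "jerma-node", hostPorts: []int{21017}}
	ss0 := podSpec{name: "ss-0", node: "jerma-node", hostPorts: []int{21017}}
	fmt.Println(fitsHostPorts(ss0, []podSpec{testPod})) // false: 21017 is taken
}
```

This is exactly the situation the test constructs on purpose; the failure is that the controller was expected to keep deleting and recreating the rejected pod, and the observed ss-0 was never deleted.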
Jan 26 22:24:08.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-1447 --tail=100'
Jan 26 22:24:08.297: INFO: rc: 1
Jan 26 22:24:08.298: INFO:
Last 100 log lines of ss-0:
Jan 26 22:24:08.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-1447'
Jan 26 22:24:08.501: INFO: stderr: ""
Jan 26 22:24:08.501: INFO:
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-1447
Priority:     0
Node:         jerma-node/10.96.2.250
Start Time:   Sun, 26 Jan 2020 22:18:59 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.44.0.1
IPs:
  IP:  10.44.0.1
Containers:
  webserver:
    Container ID:   docker://47560b914f9a8f0b4738631e85fe839c822ba39e8ccbc77bb419e7bc488f47a6
    Image:          docker.io/library/httpd:2.4.38-alpine
    Image ID:       docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Sun, 26 Jan 2020 22:19:06 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-62cvz (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            True
  ContainersReady  True
  PodScheduled     True
Volumes:
  default-token-62cvz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-62cvz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ---   ----                 -------
  Normal  Pulled   5m5s  kubelet, jerma-node  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
  Normal  Created  5m2s  kubelet, jerma-node  Created container webserver
  Normal  Started  5m2s  kubelet, jerma-node  Started container webserver
Jan 26 22:24:08.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-1447 --tail=100'
Jan 26 22:24:08.685: INFO: stderr: ""
Jan 26 22:24:08.686: INFO:
Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message
[Sun Jan 26 22:19:06.488427 2020] [mpm_event:notice] [pid 1:tid 140537277434728] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Sun Jan 26 22:19:06.488563 2020] [core:notice] [pid 1:tid 140537277434728] AH00094: Command line: 'httpd -D FOREGROUND'
Jan 26 22:24:08.686: INFO: Deleting all statefulset in ns statefulset-1447
Jan 26 22:24:08.691: INFO: Scaling statefulset ss to 0
Jan 26 22:24:18.721: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 22:24:18.727: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "statefulset-1447".
STEP: Found 12 events.
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-1447/ss is recreating failed Pod ss-0
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:18:59 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:00 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:00 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:02 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:02 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:03 +0000 UTC - event for test-pod: {kubelet jerma-node} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:06 +0000 UTC - event for test-pod: {kubelet jerma-node} Created: Created container webserver
Jan 26 22:24:18.751: INFO: At 2020-01-26 22:19:06 +0000 UTC - event for test-pod: {kubelet jerma-node} Started: Started container webserver
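Scanning the 12 collected events for the repeated PodFitsHostPorts reason is the quickest way to spot the conflict. A small sketch of that kind of filter (toy `event` struct and `byReason` helper, not the client-go API):

```go
package main

import "fmt"

// event is a minimal stand-in for a v1.Event: only the fields the filter needs.
type event struct {
	object string // the involved object, e.g. "ss-0"
	reason string // the event reason, e.g. "PodFitsHostPorts"
}

// byReason returns the objects whose events carry the given reason.
func byReason(events []event, reason string) []string {
	var out []string
	for _, e := range events {
		if e.reason == reason {
			out = append(out, e.object)
		}
	}
	return out
}

func main() {
	// A subset of the events logged above.
	events := []event{
		{"ss", "RecreatingFailedPod"},
		{"ss", "SuccessfulDelete"},
		{"ss-0", "PodFitsHostPorts"},
		{"ss-0", "PodFitsHostPorts"},
		{"test-pod", "Started"},
	}
	fmt.Println(byReason(events, "PodFitsHostPorts")) // [ss-0 ss-0]
}
```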
Jan 26 22:24:18.759: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 26 22:24:18.759: INFO: test-pod jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 22:18:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 22:19:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 22:19:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 22:18:59 +0000 UTC }]
Jan 26 22:24:18.759: INFO:
Jan 26 22:24:18.765: INFO:
Logging node info for node jerma-node
Jan 26 22:24:18.788: INFO: Node Info: &Node{ObjectMeta:{jerma-node /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 4552814 0 2020-01-04 11:59:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4136013824 0} {<nil>} 4039076Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4031156224 0} {<nil>} 3936676Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:12 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:12 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:12 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-26 22:22:12 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c 
weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 26 22:24:18.790: INFO:
Logging kubelet events for node jerma-node
Jan 26 22:24:18.793: INFO:
Logging pods the kubelet thinks are on node jerma-node
Jan 26 22:24:18.802: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.802: INFO: Container kube-proxy ready: true, restart count 0
Jan 26 22:24:18.802: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Jan 26 22:24:18.802: INFO: Container weave ready: true, restart count 1
Jan 26 22:24:18.802: INFO: Container weave-npc ready: true, restart count 0
Jan 26 22:24:18.802: INFO: test-pod started at 2020-01-26 22:18:59 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.802: INFO: Container webserver ready: true, restart count 0
Jan 26 22:24:18.856: INFO:
Latency metrics for node jerma-node
Jan 26 22:24:18.856: INFO:
Logging node info for node jerma-server-mvvl6gufaqub
Jan 26 22:24:18.865: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 4552904 0 2020-01-04 11:47:40 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4136013824 0} {<nil>} 4039076Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4031156224 0} {<nil>} 3936676Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:51 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:51 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-26 22:22:51 +0000 
UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-26 22:22:51 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 
k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 26 22:24:18.867: INFO:
Logging kubelet events for node jerma-server-mvvl6gufaqub
Jan 26 22:24:18.872: INFO:
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Jan 26 22:24:18.903: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.903: INFO: Container kube-apiserver ready: true, restart count 1
Jan 26 22:24:18.903: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.903: INFO: Container etcd ready: true, restart count 1
Jan 26 22:24:18.903: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.903: INFO: Container coredns ready: true, restart count 0
Jan 26 22:24:18.903: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.903: INFO: Container coredns ready: true, restart count 0
Jan 26 22:24:18.904: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.904: INFO: Container kube-controller-manager ready: true, restart count 3
Jan 26 22:24:18.904: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.904: INFO: Container kube-proxy ready: true, restart count 0
Jan 26 22:24:18.904: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Jan 26 22:24:18.904: INFO: Container weave ready: true, restart count 0
Jan 26 22:24:18.904: INFO: Container weave-npc ready: true, restart count 0
Jan 26 22:24:18.904: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan 26 22:24:18.904: INFO: Container kube-scheduler ready: true, restart count 4
Jan 26 22:24:18.977: INFO:
Latency metrics for node jerma-server-mvvl6gufaqub
Jan 26 22:24:18.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1447" for this suite.