/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 12 14:27:52.647: Pod ss-0 expected to be re-created at least once
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 12 14:22:48.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4667
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4667
STEP: Creating statefulset with conflicting port in namespace statefulset-4667
STEP: Waiting until pod test-pod will start running in namespace statefulset-4667
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4667
May 12 14:22:52.647: INFO: Observed stateful pod in namespace: statefulset-4667, name: ss-0, uid: c54ce0f2-a66d-402d-b8e9-5cef16aa948e, status phase: Pending. Waiting for statefulset controller to delete.
May 12 14:27:52.647: INFO: Pod ss-0 expected to be re-created at least once
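The STEP lines above summarize the scenario this conformance test builds: a standalone pod (test-pod) is pinned to a node and claims host port 21017, then a one-replica StatefulSet is pinned to the same node with the same host port, so the kubelet rejects ss-0 (the PodFitsHostPorts events further down) and the StatefulSet controller is expected to delete it and create a replacement. A minimal client-go sketch of that setup follows; the clientset cs, the nodeName argument, and the helper name are illustrative assumptions, not the actual code in test/e2e/apps/statefulset.go.

// Illustrative sketch only: reconstructs the conflicting-hostPort scenario
// described by the STEP lines above. cs and nodeName are assumptions; the
// real logic lives in test/e2e/apps/statefulset.go.
package repro

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const conflictingHostPort = 21017 // the port both pods request in the log

func createConflictingWorkloads(ctx context.Context, cs kubernetes.Interface, ns, nodeName string) error {
	container := corev1.Container{
		Name:  "nginx",
		Image: "docker.io/library/nginx:1.14-alpine",
		Ports: []corev1.ContainerPort{{ContainerPort: conflictingHostPort, HostPort: conflictingHostPort}},
	}

	// Standalone pod that grabs the host port first (test-pod in the log).
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			NodeName:   nodeName, // pin to the chosen node so the ports must collide
			Containers: []corev1.Container{container},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return fmt.Errorf("creating test-pod: %w", err)
	}

	// StatefulSet whose single replica (ss-0) requests the same host port on
	// the same node; the kubelet rejects it with PodFitsHostPorts and the
	// controller should keep deleting and recreating it.
	one := int32(1)
	labels := map[string]string{"foo": "bar", "baz": "blah"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &one,
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeName:   nodeName,
					Containers: []corev1.Container{container},
				},
			},
		},
	}
	_, err := cs.AppsV1().StatefulSets(ns).Create(ctx, ss, metav1.CreateOptions{})
	return err
}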
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
May 12 14:27:52.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-4667'
May 12 14:27:55.460: INFO: stderr: ""
May 12 14:27:55.460: INFO: stdout: "Name: ss-0\nNamespace: statefulset-4667\nPriority: 0\nNode: iruya-worker/\nLabels: baz=blah\n controller-revision-hash=ss-5867494796\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: <none>\nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs6sv (ro)\nVolumes:\n default-token-qs6sv:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qs6sv\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m3s kubelet, iruya-worker Predicate PodFitsHostPorts failed\n"
May 12 14:27:55.460: INFO:
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-4667
Priority:       0
Node:           iruya-worker/
Labels:         baz=blah
                controller-revision-hash=ss-5867494796
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs6sv (ro)
Volumes:
  default-token-qs6sv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qs6sv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                   Message
  ----     ------            ----  ----                   -------
  Warning  PodFitsHostPorts  5m3s  kubelet, iruya-worker  Predicate PodFitsHostPorts failed
May 12 14:27:55.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-4667 --tail=100'
May 12 14:27:55.562: INFO: rc: 1
May 12 14:27:55.562: INFO:
Last 100 log lines of ss-0:
May 12 14:27:55.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-4667'
May 12 14:27:55.669: INFO: stderr: ""
May 12 14:27:55.669: INFO: stdout: "Name: test-pod\nNamespace: statefulset-4667\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Tue, 12 May 2020 14:22:48 +0000\nLabels: <none>\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.133\nContainers:\n nginx:\n Container ID: containerd://f7ecc5c5c135b773c0c3b90887526554b1c8c9a07146cf7ef9ede3a87d139440\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Tue, 12 May 2020 14:22:51 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs6sv (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-qs6sv:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-qs6sv\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m6s kubelet, iruya-worker Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m4s kubelet, iruya-worker Created container nginx\n Normal Started 5m4s kubelet, iruya-worker Started container nginx\n"
May 12 14:27:55.669: INFO:
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-4667
Priority:     0
Node:         iruya-worker/172.17.0.6
Start Time:   Tue, 12 May 2020 14:22:48 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.2.133
Containers:
  nginx:
    Container ID:   containerd://f7ecc5c5c135b773c0c3b90887526554b1c8c9a07146cf7ef9ede3a87d139440
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Tue, 12 May 2020 14:22:51 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qs6sv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-qs6sv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qs6sv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                   Message
  ----    ------   ----  ----                   -------
  Normal  Pulled   5m6s  kubelet, iruya-worker  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m4s  kubelet, iruya-worker  Created container nginx
  Normal  Started  5m4s  kubelet, iruya-worker  Started container nginx
May 12 14:27:55.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-4667 --tail=100'
May 12 14:27:55.774: INFO: stderr: ""
May 12 14:27:55.774: INFO: stdout: ""
May 12 14:27:55.774: INFO:
Last 100 log lines of test-pod:
May 12 14:27:55.774: INFO: Deleting all statefulset in ns statefulset-4667
May 12 14:27:55.776: INFO: Scaling statefulset ss to 0
May 12 14:28:05.786: INFO: Waiting for statefulset status.replicas updated to 0
May 12 14:28:05.788: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-4667".
STEP: Found 14 events.
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-4667/ss is recreating failed Pod ss-0
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:48 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:49 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:49 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:49 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:49 +0000 UTC - event for test-pod: {kubelet iruya-worker} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:51 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:51 +0000 UTC - event for test-pod: {kubelet iruya-worker} Created: Created container nginx
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:51 +0000 UTC - event for test-pod: {kubelet iruya-worker} Started: Started container nginx
May 12 14:28:05.807: INFO: At 2020-05-12 14:22:52 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed
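The events above show the expected churn (SuccessfulCreate, kubelet PodFitsHostPorts rejections, SuccessfulDelete, RecreatingFailedPod, and one FailedCreate bounced by the API server), yet the test still timed out at 14:27:52 because it never observed ss-0 come back with a UID different from the one logged at 14:22:52 (c54ce0f2-...). A rough poll-based sketch of that "re-created at least once" check is below; cs, the helper name, and the intervals are assumptions for illustration, not the framework's actual implementation.

// Rough, illustrative sketch of what "re-created at least once" means:
// poll until ss-0 exists again with a UID different from the one first
// observed. cs and the timeouts are assumptions, not the e2e framework's code.
package repro

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodRecreation(ctx context.Context, cs kubernetes.Interface, ns, name string, originalUID types.UID) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// The controller has deleted the failed pod; keep waiting for its replacement.
			return false, nil
		}
		if err != nil {
			return false, err
		}
		// A different UID proves the original pod was deleted and a new one created.
		return pod.UID != originalUID, nil
	})
}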
May 12 14:28:05.809: INFO: POD NODE PHASE GRACE CONDITIONS
May 12 14:28:05.809: INFO: test-pod iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:22:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:22:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:22:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 14:22:48 +0000 UTC }]
May 12 14:28:05.809: INFO:
May 12 14:28:05.815: INFO:
Logging node info for node iruya-control-plane
May 12 14:28:05.817: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-control-plane,UID:5b69a0f9-55ac-48be-a8d0-5e04b939b798,ResourceVersion:10498503,Generation:0,CreationTimestamp:2020-03-15 18:24:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-control-plane,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {<nil>} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {<nil>} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{134922129408 0} {<nil>} 131759892Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {<nil>} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {<nil>} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{134922129408 0} {<nil>} 131759892Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-12 14:27:09 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-12 14:27:09 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-12 14:27:09 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-12 14:27:09 +0000 UTC 2020-03-15 18:25:00 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.7} {Hostname iruya-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09f14f6f4d1640fcaab2243401c9f154,SystemUUID:7c6ca533-492e-400c-b058-c282f97a69ec,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
May 12 14:28:05.817: INFO:
Logging kubelet events for node iruya-control-plane
May 12 14:28:05.819: INFO:
Logging pods the kubelet thinks is on node iruya-control-plane
May 12 14:28:05.825: INFO: local-path-provisioner-d4947b89c-72frh started at 2020-03-15 18:25:04 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.825: INFO: Container local-path-provisioner ready: true, restart count 0
May 12 14:28:05.825: INFO: kube-apiserver-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.825: INFO: Container kube-apiserver ready: true, restart count 0
May 12 14:28:05.825: INFO: kube-controller-manager-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.825: INFO: Container kube-controller-manager ready: true, restart count 0
May 12 14:28:05.826: INFO: kube-scheduler-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.826: INFO: Container kube-scheduler ready: true, restart count 0
May 12 14:28:05.826: INFO: etcd-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.826: INFO: Container etcd ready: true, restart count 0
May 12 14:28:05.826: INFO: kindnet-zn8sx started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.826: INFO: Container kindnet-cni ready: true, restart count 0
May 12 14:28:05.826: INFO: kube-proxy-46nsr started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.826: INFO: Container kube-proxy ready: true, restart count 0
May 12 14:28:05.900: INFO:
Latency metrics for node iruya-control-plane
May 12 14:28:05.900: INFO:
Logging node info for node iruya-worker
May 12 14:28:05.903: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker,UID:94e58020-6063-4274-b0bd-d7c4f772701c,ResourceVersion:10498532,Generation:0,CreationTimestamp:2020-03-15 18:24:54 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {<nil>} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {<nil>} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{134922129408 0} {<nil>} 131759892Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {<nil>} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {<nil>} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{134922129408 0} {<nil>} 131759892Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-12 14:27:23 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-12 14:27:23 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-12 14:27:23 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-12 14:27:23 +0000 UTC 2020-03-15 18:25:15 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.6} {Hostname iruya-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5332b21b7d0c4f35b2434f4fc8bea1cf,SystemUUID:92e1ff09-3c3c-490b-b499-0de27dc489ae,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 
36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
May 12 14:28:05.903: INFO:
Logging kubelet events for node iruya-worker
May 12 14:28:05.906: INFO:
Logging pods the kubelet thinks is on node iruya-worker
May 12 14:28:05.911: INFO: kube-proxy-pmz4p started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.911: INFO: Container kube-proxy ready: true, restart count 0
May 12 14:28:05.911: INFO: kindnet-gwz5g started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.911: INFO: Container kindnet-cni ready: true, restart count 0
May 12 14:28:05.911: INFO: test-pod started at 2020-05-12 14:22:48 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.911: INFO: Container nginx ready: true, restart count 0
May 12 14:28:05.960: INFO:
Latency metrics for node iruya-worker
May 12 14:28:05.960: INFO:
Logging node info for node iruya-worker2
May 12 14:28:05.963: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker2,UID:67dfdf76-d64a-45cb-a2a9-755b73c85644,ResourceVersion:10498528,Generation:0,CreationTimestamp:2020-03-15 18:24:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker2,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {<nil>} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {<nil>} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{134922129408 0} {<nil>} 131759892Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {<nil>} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {<nil>} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{134922129408 0} {<nil>} 131759892Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-12 14:27:21 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-12 14:27:21 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-12 14:27:21 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-12 14:27:21 +0000 UTC 2020-03-15 18:24:52 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.5} {Hostname iruya-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5fda03f0d02548b7a74f8a4b6cc8795b,SystemUUID:d8b7a3a5-76b4-4c0b-85d7-cdb97f2c8b1a,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 
36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
May 12 14:28:05.964: INFO:
Logging kubelet events for node iruya-worker2
May 12 14:28:05.966: INFO:
Logging pods the kubelet thinks is on node iruya-worker2
May 12 14:28:05.971: INFO: coredns-5d4dd4b4db-gm7vr started at 2020-03-15 18:24:52 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.971: INFO: Container coredns ready: true, restart count 0
May 12 14:28:05.971: INFO: coredns-5d4dd4b4db-6jcgz started at 2020-03-15 18:24:54 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.971: INFO: Container coredns ready: true, restart count 0
May 12 14:28:05.971: INFO: kube-proxy-vwbcj started at 2020-03-15 18:24:42 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.971: INFO: Container kube-proxy ready: true, restart count 0
May 12 14:28:05.971: INFO: kindnet-mgd8b started at 2020-03-15 18:24:43 +0000 UTC (0+1 container statuses recorded)
May 12 14:28:05.971: INFO: Container kindnet-cni ready: true, restart count 0
May 12 14:28:06.061: INFO:
Latency metrics for node iruya-worker2
May 12 14:28:06.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4667" for this suite.
May 12 14:28:28.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 12 14:28:28.141: INFO: namespace statefulset-4667 deletion completed in 22.076841526s