I0512 19:59:14.931047 22 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0512 19:59:14.931174 22 e2e.go:129] Starting e2e run "f4d841aa-57d0-4b00-8777-8ed07b9d4633" on Ginkgo node 1
{"msg":"Test Suite starting","total":4,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1620849553 - Will randomize all specs
Will run 4 of 5484 specs

May 12 19:59:14.945: INFO: >>> kubeConfig: /root/.kube/config
May 12 19:59:14.950: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 19:59:14.978: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 19:59:15.041: INFO: The status of Pod cmk-init-discover-node1-2x2zk is Succeeded, skipping waiting
May 12 19:59:15.041: INFO: The status of Pod cmk-init-discover-node2-qrd9v is Succeeded, skipping waiting
May 12 19:59:15.041: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 19:59:15.041: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 12 19:59:15.041: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 19:59:15.051: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 12 19:59:15.051: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 12 19:59:15.051: INFO: e2e test version: v1.19.10
May 12 19:59:15.052: INFO: kube-apiserver version: v1.19.8
May 12 19:59:15.052: INFO: >>> kubeConfig: /root/.kube/config
May 12 19:59:15.057: INFO: Cluster IP family: ipv4
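The framework output above shows the suite gating on cluster health before any spec runs: nodes must be schedulable (while tolerating the "node-role.kubernetes.io/master" taint), and kube-system pods and daemonsets must be ready. As a rough sketch only, and not the e2e framework's actual helper, the same kind of readiness gate can be written against client-go; the kubeconfig path comes from the log above, everything else is illustrative:

```go
// Hedged sketch of a node-readiness gate like the one logged by test_context.go:
// a node counts as ready when its Ready condition is True, and as schedulable
// when every taint on it is in an allowed set such as the master taint.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func schedulable(node *corev1.Node, tolerated map[string]bool) bool {
	if node.Spec.Unschedulable {
		return false
	}
	for _, t := range node.Spec.Taints {
		if !tolerated[t.Key] {
			return false
		}
	}
	return true
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Taint tolerated by the suite, per the test_context.go line above.
	tolerated := map[string]bool{"node-role.kubernetes.io/master": true}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s ready=%v schedulable=%v\n", n.Name, nodeReady(&n), schedulable(&n, tolerated))
	}
}
```

In the real framework this check is wrapped in a poll with the 30m0s timeout the log reports; the sketch shows only a single pass.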
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS [run of Ginkgo skipped-spec markers, one "S" per skipped spec]
------------------------------
[sig-apps] DisruptionController
  evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 19:59:15.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
May 12 19:59:15.103: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 12 19:59:15.106: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68
[It] evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
STEP: Waiting for the pdb to be processed
STEP: locating a running pod
May 12 20:09:17.130: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002bc200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func5.6()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241 +0x16a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001136300)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001136300)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001136300, 0x4de37a0)
    /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "disruption-4326".
STEP: Found 4 events.
May 12 20:09:17.135: INFO: At 2021-05-12 19:59:15 +0000 UTC - event for foo: {controllermanager } NoPods: No matching pods found
May 12 20:09:17.135: INFO: At 2021-05-12 19:59:15 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: []]
May 12 20:09:17.135: INFO: At 2021-05-12 19:59:15 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9103-9104]]
May 12 20:09:17.135: INFO: At 2021-05-12 20:04:42 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used.
Allowed ports: [9100]] May 12 20:09:17.137: INFO: POD NODE PHASE GRACE CONDITIONS May 12 20:09:17.137: INFO: May 12 20:09:17.142: INFO: Logging node info for node master1 May 12 20:09:17.144: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 03e64d13-444c-41eb-b6bd-3745f05cd1cd 78192 0 2021-05-12 16:30:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ce:94:4b:fc:cf:bc"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:30:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-05-12 16:31:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:27 +0000 UTC,LastTransitionTime:2021-05-12 16:35:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:14 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:14 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:14 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:09:14 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:201a06a0695b47fdbdd7df0fdd94f4dc,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a7dddfd4-eda7-4a08-9750-d4a3c87c0cdd,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:1935bf73835eb6d4446668a5484bb3724d97d926b26b41c7bf064aa3a5a8bc5f tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:09:17.144: INFO: Logging kubelet events for node master1 May 12 20:09:17.146: INFO: Logging pods the kubelet thinks is 
on node master1
May 12 20:09:17.164: INFO: kube-apiserver-master1 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.164: INFO: Container kube-apiserver ready: true, restart count 0
May 12 20:09:17.164: INFO: kube-scheduler-master1 started at 2021-05-12 16:32:00 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.164: INFO: Container kube-scheduler ready: true, restart count 0
May 12 20:09:17.164: INFO: kube-proxy-v5zpq started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.164: INFO: Container kube-proxy ready: true, restart count 2
May 12 20:09:17.164: INFO: kube-flannel-rc2gh started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded)
May 12 20:09:17.164: INFO: Init container install-cni ready: true, restart count 2
May 12 20:09:17.164: INFO: Container kube-flannel ready: true, restart count 3
May 12 20:09:17.164: INFO: coredns-7677f9bb54-4lkxs started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.164: INFO: Container coredns ready: true, restart count 2
May 12 20:09:17.164: INFO: docker-registry-docker-registry-56cbc7bc58-lfjnt started at 2021-05-12 16:36:16 +0000 UTC (0+2 container statuses recorded)
May 12 20:09:17.164: INFO: Container docker-registry ready: true, restart count 0
May 12 20:09:17.164: INFO: Container nginx ready: true, restart count 0
May 12 20:09:17.164: INFO: kube-controller-manager-master1 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.164: INFO: Container kube-controller-manager ready: true, restart count 2
May 12 20:09:17.164: INFO: kube-multus-ds-amd64-rcwwm started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.164: INFO: Container kube-multus ready: true, restart count 1
May 12 20:09:17.164: INFO: node-exporter-wd9j2 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded)
May 12 20:09:17.164: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 20:09:17.164: INFO: Container node-exporter ready: true, restart count 0
W0512 20:09:17.179443 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
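The node and kubelet state above looks healthy, which points the failure back at admission control: the FailedCreate events collected from namespace "disruption-4326" show every PodSecurityPolicy in the cluster rejecting the test pod's hostPort 5555 (the allowed ranges are [9103-9104], [9100], and []), so the ReplicaSet never produced a running pod and the "locating a running pod" step timed out after 10 minutes. A minimal sketch, assuming an operator wanted to admit that port, might look like the following; the policy name and the permissive strategy rules are illustrative, not taken from this cluster:

```go
// Hedged sketch: a PodSecurityPolicy whose hostPorts range would admit the
// hostPort 5555 that the FailedCreate events show being rejected.
package main

import (
	"context"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	psp := &policyv1beta1.PodSecurityPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-allow-hostport-5555"}, // hypothetical name
		Spec: policyv1beta1.PodSecurityPolicySpec{
			// The decisive field: the existing policies only allow
			// [9103-9104], [9100], or no host ports at all.
			HostPorts: []policyv1beta1.HostPortRange{{Min: 5555, Max: 5555}},
			// Required strategy fields, set to the most permissive rule for brevity.
			SELinux:            policyv1beta1.SELinuxStrategyOptions{Rule: policyv1beta1.SELinuxStrategyRunAsAny},
			RunAsUser:          policyv1beta1.RunAsUserStrategyOptions{Rule: policyv1beta1.RunAsUserStrategyRunAsAny},
			SupplementalGroups: policyv1beta1.SupplementalGroupsStrategyOptions{Rule: policyv1beta1.SupplementalGroupsStrategyRunAsAny},
			FSGroup:            policyv1beta1.FSGroupStrategyOptions{Rule: policyv1beta1.FSGroupStrategyRunAsAny},
		},
	}
	if _, err := cs.PolicyV1beta1().PodSecurityPolicies().Create(context.TODO(), psp, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Under PSP a pod is only admitted if its service account is also authorized to `use` such a policy via RBAC, so a role binding would be needed alongside the policy itself.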
May 12 20:09:17.212: INFO: Latency metrics for node master1 May 12 20:09:17.212: INFO: Logging node info for node master2 May 12 20:09:17.214: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 f3d8b0a9-b7ca-4942-bfc7-69b182fddff8 78186 0 2021-05-12 16:31:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:9d:92:dd:24:1d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:31:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:31:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-12 16:33:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:36:36 +0000 UTC,LastTransitionTime:2021-05-12 16:36:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:12 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:12 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:12 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:09:12 +0000 UTC,LastTransitionTime:2021-05-12 16:33:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ef3726c355844802be21d744dd831d84,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:6feec048-83de-4ff4-ae36-12985bcda218,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:09:17.215: INFO: Logging kubelet events for node master2 May 12 20:09:17.217: INFO: Logging pods the kubelet thinks is on node master2 May 12 20:09:17.231: INFO: coredns-7677f9bb54-b628b started at 2021-05-12 16:33:48 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.231: INFO: Container coredns ready: true, restart count 1 May 12 20:09:17.231: INFO: node-exporter-dprww started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:09:17.231: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:09:17.231: INFO: Container node-exporter ready: true, restart count 0 May 12 20:09:17.231: INFO: kube-apiserver-master2 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.231: INFO: Container kube-apiserver ready: true, restart count 0 May 12 20:09:17.231: INFO: kube-controller-manager-master2 started at 2021-05-12 16:38:53 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.231: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 20:09:17.231: INFO: kube-scheduler-master2 started at 2021-05-12 16:32:00 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.231: INFO: Container kube-scheduler ready: true, restart count 2 May 12 20:09:17.231: INFO: kube-proxy-6ljsd started at 2021-05-12 16:32:45 +0000 
UTC (0+1 container statuses recorded) May 12 20:09:17.231: INFO: Container kube-proxy ready: true, restart count 2 May 12 20:09:17.231: INFO: kube-flannel-pppch started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:09:17.231: INFO: Init container install-cni ready: true, restart count 2 May 12 20:09:17.231: INFO: Container kube-flannel ready: true, restart count 1 May 12 20:09:17.232: INFO: kube-multus-ds-amd64-4swdk started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.232: INFO: Container kube-multus ready: true, restart count 1 W0512 20:09:17.245474 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:09:17.276: INFO: Latency metrics for node master2 May 12 20:09:17.276: INFO: Logging node info for node master3 May 12 20:09:17.278: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 a0124635-fd55-4774-af26-32765b659025 78191 0 2021-05-12 16:31:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"8e:91:ce:de:d3:10"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:31:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-12 16:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-12 16:39:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:42 +0000 UTC,LastTransitionTime:2021-05-12 16:35:42 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:13 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:13 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:13 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:09:13 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8966dda31ac243c3b9142da7eb1a3315,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:29bf94f6-9eaa-4435-8e2a-f17b4721b8df,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:09:17.278: INFO: Logging kubelet events for node master3 May 12 
20:09:17.280: INFO: Logging pods the kubelet thinks is on node master3
May 12 20:09:17.295: INFO: node-feature-discovery-controller-5bf5c49849-gdz9z started at 2021-05-12 16:38:58 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.295: INFO: Container nfd-controller ready: true, restart count 0
May 12 20:09:17.295: INFO: node-exporter-jzp84 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded)
May 12 20:09:17.295: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 20:09:17.295: INFO: Container node-exporter ready: true, restart count 0
May 12 20:09:17.295: INFO: kube-scheduler-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.295: INFO: Container kube-scheduler ready: true, restart count 3
May 12 20:09:17.295: INFO: kube-proxy-d57dm started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.295: INFO: Container kube-proxy ready: true, restart count 1
May 12 20:09:17.295: INFO: kube-flannel-hmbzn started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded)
May 12 20:09:17.295: INFO: Init container install-cni ready: true, restart count 2
May 12 20:09:17.295: INFO: Container kube-flannel ready: true, restart count 2
May 12 20:09:17.295: INFO: dns-autoscaler-5b7b5c9b6f-8fzpk started at 2021-05-12 16:33:51 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.295: INFO: Container autoscaler ready: true, restart count 1
May 12 20:09:17.295: INFO: kube-apiserver-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.295: INFO: Container kube-apiserver ready: true, restart count 0
May 12 20:09:17.295: INFO: kube-controller-manager-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.295: INFO: Container kube-controller-manager ready: true, restart count 2
May 12 20:09:17.295: INFO: kube-multus-ds-amd64-4hlf8 started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded)
May 12 20:09:17.295: INFO: Container kube-multus ready: true, restart count 1
May 12 20:09:17.295: INFO: prometheus-operator-5bb8cb9d8f-sqgl4 started at 2021-05-12 16:42:55 +0000 UTC (0+2 container statuses recorded)
May 12 20:09:17.295: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 20:09:17.295: INFO: Container prometheus-operator ready: true, restart count 0
W0512 20:09:17.306474 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
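For reference on what disruption.go:241 was waiting to do before the timeout: the spec name says an integer maxUnavailable should cause the eviction subresource to deny an eviction. A hedged sketch of that flow follows, with illustrative fixture values; only the PDB name "foo" and the namespace "disruption-4326" come from this log, the selector labels and pod name are hypothetical:

```go
// Hedged sketch of the deny-eviction flow this spec exercises: create a
// PodDisruptionBudget whose maxUnavailable is an integer, then ask the
// eviction subresource to evict a covered pod. While the budget would be
// violated, the API server answers 429 Too Many Requests.
package main

import (
	"context"
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ns := "disruption-4326" // test namespace from this run

	maxUnavailable := intstr.FromInt(0) // integer form, per the spec name; 0 denies all evictions
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: ns},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}}, // hypothetical labels
			MaxUnavailable: &maxUnavailable,
		},
	}
	if _, err := cs.PolicyV1beta1().PodDisruptionBudgets(ns).Create(context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Attempt the eviction; while the budget forbids disruption the call
	// fails with 429, which IsTooManyRequests detects.
	err = cs.CoreV1().Pods(ns).Evict(context.TODO(), &policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: "some-running-pod", Namespace: ns}, // hypothetical pod name
	})
	fmt.Println("eviction denied by budget:", apierrors.IsTooManyRequests(err))
}
```

In this run the deny/allow check was never reached: the PodSecurityPolicy rejections meant no pod from the ReplicaSet ever started, so "locating a running pod" timed out first.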
May 12 20:09:17.337: INFO: Latency metrics for node master3 May 12 20:09:17.337: INFO: Logging node info for node node1 May 12 20:09:17.340: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 2f7b1644-6865-412d-b423-194903813633 78177 0 2021-05-12 16:32:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"6a:b5:62:81:d9:8e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-12 16:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-12 16:39:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-12 16:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-12 19:02:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:36:30 +0000 UTC,LastTransitionTime:2021-05-12 16:36:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:08 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:08 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:08 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:09:08 +0000 UTC,LastTransitionTime:2021-05-12 16:33:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d9a9ef88c91340689744923e9951f78a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:d2f244db-a37b-4dba-a8f9-6ea24c6cdb4e,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:96570ae3225d359c9667e2cbd57987388aa9d3be9d5a198ed9b677c7f9f4e450 localhost:30500/barometer-collectd:stable],SizeBytes:1464260582,},ContainerImage{Names:[@ :],SizeBytes:1002488025,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f178577724ef3b8118ca58277a27d594257de6c55a813b61e421df4008a7f73b cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:1636899c10870ab66c48d960a9df620f4f9e86a0c72fbacf36032d27404e7e6c golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:34ac17eb150f557f6ce94ea34e7c03f3899b717307cad633213a9488a431179f nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392919,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:09:17.340: INFO: Logging kubelet events for node node1 May 12 20:09:17.345: INFO: Logging pods the kubelet thinks is on node node1 May 12 20:09:17.365: INFO: cmk-init-discover-node1-2x2zk started at 2021-05-12 16:41:25 +0000 UTC (0+3 container statuses recorded) May 12 20:09:17.365: INFO: Container discover ready: false, restart count 0 May 12 20:09:17.365: INFO: Container init ready: false, restart count 0 May 12 20:09:17.365: INFO: Container install ready: false, restart count 0 May 12 20:09:17.365: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff started at 2021-05-12 16:39:41 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.365: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 20:09:17.365: INFO: node-feature-discovery-worker-qtn84 started at 2021-05-12 16:38:48 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.365: INFO: Container nfd-worker ready: true, restart count 0 May 12 20:09:17.365: INFO: nginx-proxy-node1 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.365: INFO: Container nginx-proxy ready: true, restart count 2 May 12 20:09:17.365: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.365: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 12 20:09:17.365: INFO: collectd-5mpmz started at 2021-05-12 16:49:38 +0000 UTC (0+3 container statuses recorded) May 12 20:09:17.365: INFO: Container collectd ready: true, restart count 0 May 12 20:09:17.365: INFO: Container collectd-exporter ready: true, restart count 0 May 12 20:09:17.365: INFO: Container rbac-proxy ready: true, restart count 0 May 12 20:09:17.365: INFO: prometheus-k8s-0 started at 2021-05-12 16:43:20 +0000 UTC (0+5 container statuses recorded) May 12 20:09:17.365: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 12 20:09:17.365: INFO: Container grafana ready: true, restart count 0 May 12 20:09:17.365: INFO: Container prometheus ready: true, restart count 1 May 12 20:09:17.365: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 12 20:09:17.365: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 12 20:09:17.365: INFO: node-exporter-ddxbd started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:09:17.365: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:09:17.365: INFO: Container node-exporter ready: true, restart count 0 May 12 20:09:17.365: INFO: kube-proxy-r9vsx started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.365: INFO: Container kube-proxy ready: true, restart count 1 May 12 20:09:17.365: INFO: kube-multus-ds-amd64-fhzwc started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.365: INFO: Container kube-multus ready: true, restart count 1 May 12 20:09:17.365: INFO: kube-flannel-r7w6z started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:09:17.365: INFO: Init container install-cni ready: true, restart 
count 1 May 12 20:09:17.365: INFO: Container kube-flannel ready: true, restart count 2 May 12 20:09:17.365: INFO: cmk-v4qwz started at 2021-05-12 16:42:07 +0000 UTC (0+2 container statuses recorded) May 12 20:09:17.365: INFO: Container nodereport ready: true, restart count 0 May 12 20:09:17.365: INFO: Container reconcile ready: true, restart count 0 May 12 20:09:17.365: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.365: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 W0512 20:09:17.379832 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:09:17.413: INFO: Latency metrics for node node1 May 12 20:09:17.413: INFO: Logging node info for node node2 May 12 20:09:17.417: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 16e4ef1f-0c51-44d5-a257-2eaa348c0d52 78179 0 2021-05-12 16:32:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:9a:ee:23:4c:ac"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-12 16:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-12 16:39:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-12 16:41:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-12 19:08:41 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-05-12 19:08:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:14 +0000 UTC,LastTransitionTime:2021-05-12 16:35:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:09 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:09 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:09:09 +0000 
UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:09:09 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eebaf4858fee4a739009f2f7f2717953,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:15bd8119-98f5-4e13-a2fa-897c2224dd6e,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:96570ae3225d359c9667e2cbd57987388aa9d3be9d5a198ed9b677c7f9f4e450 localhost:30500/barometer-collectd:stable],SizeBytes:1464260582,},ContainerImage{Names:[localhost:30500/cmk@sha256:f178577724ef3b8118ca58277a27d594257de6c55a813b61e421df4008a7f73b localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:34ac17eb150f557f6ce94ea34e7c03f3899b717307cad633213a9488a431179f nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 
quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392919,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:1935bf73835eb6d4446668a5484bb3724d97d926b26b41c7bf064aa3a5a8bc5f localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:09:17.418: INFO: Logging kubelet events for node node2 May 12 20:09:17.419: INFO: Logging pods the kubelet thinks is on node node2 May 12 20:09:17.436: INFO: cmk-init-discover-node2-qrd9v started at 2021-05-12 16:41:44 +0000 UTC (0+3 container statuses recorded) May 12 20:09:17.436: INFO: Container discover ready: false, restart count 0 May 12 20:09:17.436: INFO: Container init ready: false, restart count 0 May 12 20:09:17.436: INFO: Container install ready: false, restart count 0 May 12 20:09:17.436: INFO: kube-proxy-grtqc started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 
20:09:17.436: INFO: Container kube-proxy ready: true, restart count 2 May 12 20:09:17.436: INFO: cmk-gbw5d started at 2021-05-12 16:42:08 +0000 UTC (0+2 container statuses recorded) May 12 20:09:17.437: INFO: Container nodereport ready: true, restart count 0 May 12 20:09:17.437: INFO: Container reconcile ready: true, restart count 0 May 12 20:09:17.437: INFO: collectd-tng6x started at 2021-05-12 16:49:38 +0000 UTC (0+3 container statuses recorded) May 12 20:09:17.437: INFO: Container collectd ready: true, restart count 0 May 12 20:09:17.437: INFO: Container collectd-exporter ready: true, restart count 0 May 12 20:09:17.437: INFO: Container rbac-proxy ready: true, restart count 0 May 12 20:09:17.437: INFO: node-exporter-h5rv7 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:09:17.437: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:09:17.437: INFO: Container node-exporter ready: true, restart count 0 May 12 20:09:17.437: INFO: kube-multus-ds-amd64-k28rf started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.437: INFO: Container kube-multus ready: true, restart count 1 May 12 20:09:17.437: INFO: node-feature-discovery-worker-zjvzk started at 2021-05-12 16:38:48 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.437: INFO: Container nfd-worker ready: true, restart count 0 May 12 20:09:17.437: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-j6r4r started at 2021-05-12 16:39:41 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.437: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 20:09:17.437: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8xc75 started at 2021-05-12 16:46:05 +0000 UTC (0+2 container statuses recorded) May 12 20:09:17.437: INFO: Container tas-controller ready: true, restart count 0 May 12 20:09:17.437: INFO: Container tas-extender ready: true, restart count 0 May 12 20:09:17.437: INFO: nginx-proxy-node2 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.437: INFO: Container nginx-proxy ready: true, restart count 2 May 12 20:09:17.437: INFO: kube-flannel-rqtcs started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:09:17.437: INFO: Init container install-cni ready: true, restart count 2 May 12 20:09:17.437: INFO: Container kube-flannel ready: true, restart count 1 May 12 20:09:17.437: INFO: cmk-webhook-6c9d5f8578-fgcvr started at 2021-05-12 16:42:08 +0000 UTC (0+1 container statuses recorded) May 12 20:09:17.437: INFO: Container cmk-webhook ready: true, restart count 0 W0512 20:09:17.453182 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:09:17.494: INFO: Latency metrics for node node2 May 12 20:09:17.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-4326" for this suite. 
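The spec that fails below ("evictions: maxUnavailable deny evictions, integer => should not allow an eviction") drives the eviction subresource against a PodDisruptionBudget with an integer maxUnavailable and expects the API server to deny the request. A minimal client-go (v0.19.x) sketch of that flow, under the assumption of a zero-headroom budget — the namespace, pod name, and labels here are illustrative, not the framework's actual values:

```go
package main

import (
	"context"
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns := "disruption-test" // illustrative namespace, not the suite's generated one

	// An integer maxUnavailable of 0 means no voluntary disruption is allowed.
	maxUnavailable := intstr.FromInt(0)
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: ns},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
			MaxUnavailable: &maxUnavailable,
		},
	}
	if _, err := client.PolicyV1beta1().PodDisruptionBudgets(ns).Create(
		context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Ask the eviction subresource to evict a running pod covered by the PDB.
	eviction := &policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: "some-running-pod", Namespace: ns},
	}
	err = client.CoreV1().Pods(ns).Evict(context.TODO(), eviction)

	// A denied eviction surfaces as HTTP 429 (TooManyRequests).
	fmt.Println("eviction denied:", apierrors.IsTooManyRequests(err))
}
```

If the budget were satisfiable, the same Evict call would return nil and the pod would be deleted; a denial instead comes back as 429 with a message that the eviction would violate the pod's disruption budget.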
• Failure [602.419 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 May 12 20:09:17.130: Unexpected error: <*errors.errorString | 0xc0002bc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241 ------------------------------ {"msg":"FAILED [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","total":4,"completed":0,"skipped":1871,"failed":1,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:312 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 20:09:17.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should not update pod when spec was updated and update strategy is OnDelete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:312 May 12 20:09:17.538: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
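The polling that follows skips the three master nodes because the DaemonSet's pod template carries no toleration for their node-role.kubernetes.io/master:NoSchedule taint, and it keeps reporting zero nodes with available pods until each worker runs a ready daemon pod (the oddly worded "is running more than one daemon pod" record appears to be the framework's generic mismatch message while it waits for exactly one available pod per node). A rough sketch of a DaemonSet like the one under test, with the OnDelete update strategy named in the spec — the image and labels are assumptions, the e2e framework constructs its own template:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newOnDeleteDaemonSet sketches a DaemonSet comparable to the spec's
// "daemon-set": one pod per schedulable node, OnDelete update strategy.
func newOnDeleteDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.OnDeleteDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "httpd:2.4.38-alpine", // one of the images already cached on the nodes above
					}},
					// To also schedule onto the tainted masters seen in the log,
					// the template would additionally need a toleration such as:
					// Tolerations: []corev1.Toleration{{
					// 	Key:      "node-role.kubernetes.io/master",
					// 	Operator: corev1.TolerationOpExists,
					// 	Effect:   corev1.TaintEffectNoSchedule,
					// }},
				},
			},
		},
	}
}

func main() {
	ds := newOnDeleteDaemonSet()
	fmt.Println(ds.Spec.UpdateStrategy.Type) // OnDelete
}
```

With OnDelete, updating the pod template leaves running pods untouched; the new template is applied only when a pod is deleted and the controller recreates it — which is exactly the behavior this spec asserts.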
May 12 20:09:17.546: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:09:17.546: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:09:17.546: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:09:17.548: INFO: Number of nodes with available pods: 0
May 12 20:09:17.548: INFO: Node node1 is running more than one daemon pod
[identical five-record poll blocks from 20:09:18 through 20:10:07, one per second, omitted; every iteration reports 0 nodes with available pods]
May 12 20:10:08.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:10:08.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:10:08.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule
TimeAdded:}], skip checking this node May 12 20:10:08.558: INFO: Number of nodes with available pods: 0 May 12 20:10:08.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:09.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:09.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:09.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:09.556: INFO: Number of nodes with available pods: 0 May 12 20:10:09.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:10.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:10.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:10.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:10.557: INFO: Number of nodes with available pods: 0 May 12 20:10:10.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:11.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:11.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:11.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:11.558: INFO: Number of nodes with available pods: 0 May 12 20:10:11.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:12.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:12.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:12.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:12.557: INFO: Number of nodes with available pods: 0 May 12 20:10:12.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:13.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:13.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:13.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:13.557: INFO: Number of nodes with available pods: 0 May 12 20:10:13.557: 
INFO: Node node1 is running more than one daemon pod May 12 20:10:14.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:14.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:14.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:14.559: INFO: Number of nodes with available pods: 0 May 12 20:10:14.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:15.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:15.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:15.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:15.556: INFO: Number of nodes with available pods: 0 May 12 20:10:15.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:16.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:16.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:16.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:16.558: INFO: Number of nodes with available pods: 0 May 12 20:10:16.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:17.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:17.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:17.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:17.557: INFO: Number of nodes with available pods: 0 May 12 20:10:17.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:18.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:18.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:18.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:18.559: INFO: Number of nodes with available pods: 0 May 12 20:10:18.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:19.554: INFO: DaemonSet pods can't tolerate node master1 
with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:19.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:19.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:19.557: INFO: Number of nodes with available pods: 0 May 12 20:10:19.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:20.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:20.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:20.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:20.557: INFO: Number of nodes with available pods: 0 May 12 20:10:20.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:21.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:21.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:21.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:21.556: INFO: Number of nodes with available pods: 0 May 12 20:10:21.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:22.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:22.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:22.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:22.557: INFO: Number of nodes with available pods: 0 May 12 20:10:22.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:23.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:23.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:23.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:23.559: INFO: Number of nodes with available pods: 0 May 12 20:10:23.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:24.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 
20:10:24.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:24.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:24.555: INFO: Number of nodes with available pods: 0 May 12 20:10:24.555: INFO: Node node1 is running more than one daemon pod May 12 20:10:25.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:25.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:25.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:25.555: INFO: Number of nodes with available pods: 0 May 12 20:10:25.555: INFO: Node node1 is running more than one daemon pod May 12 20:10:26.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:26.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:26.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:26.558: INFO: Number of nodes with available pods: 0 May 12 20:10:26.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:27.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:27.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:27.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:27.556: INFO: Number of nodes with available pods: 0 May 12 20:10:27.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:28.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:28.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:28.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:28.558: INFO: Number of nodes with available pods: 0 May 12 20:10:28.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:29.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:29.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:29.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:29.558: INFO: Number of nodes with available pods: 0 May 12 20:10:29.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:30.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:30.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:30.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:30.555: INFO: Number of nodes with available pods: 0 May 12 20:10:30.555: INFO: Node node1 is running more than one daemon pod May 12 20:10:31.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:31.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:31.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:31.559: INFO: Number of nodes with available pods: 0 May 12 20:10:31.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:32.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:32.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:32.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:32.559: INFO: Number of nodes with available pods: 0 May 12 20:10:32.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:33.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:33.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:33.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:33.555: INFO: Number of nodes with available pods: 0 May 12 20:10:33.555: INFO: Node node1 is running more than one daemon pod May 12 20:10:34.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:34.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:34.555: INFO: DaemonSet pods can't tolerate node 
master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:34.559: INFO: Number of nodes with available pods: 0 May 12 20:10:34.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:35.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:35.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:35.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:35.556: INFO: Number of nodes with available pods: 0 May 12 20:10:35.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:36.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:36.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:36.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:36.556: INFO: Number of nodes with available pods: 0 May 12 20:10:36.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:37.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:37.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:37.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:37.558: INFO: Number of nodes with available pods: 0 May 12 20:10:37.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:38.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:38.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:38.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:38.559: INFO: Number of nodes with available pods: 0 May 12 20:10:38.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:39.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:39.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:39.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 
12 20:10:39.557: INFO: Number of nodes with available pods: 0 May 12 20:10:39.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:40.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:40.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:40.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:40.559: INFO: Number of nodes with available pods: 0 May 12 20:10:40.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:41.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:41.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:41.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:41.554: INFO: Number of nodes with available pods: 0 May 12 20:10:41.554: INFO: Node node1 is running more than one daemon pod May 12 20:10:42.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:42.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:42.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:42.557: INFO: Number of nodes with available pods: 0 May 12 20:10:42.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:43.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:43.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:43.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:43.559: INFO: Number of nodes with available pods: 0 May 12 20:10:43.559: INFO: Node node1 is running more than one daemon pod May 12 20:10:44.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:44.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:44.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:44.557: INFO: Number of nodes with available pods: 0 May 12 20:10:44.557: INFO: Node node1 is running more than one 
daemon pod May 12 20:10:45.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:45.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:45.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:45.556: INFO: Number of nodes with available pods: 0 May 12 20:10:45.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:46.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:46.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:46.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:46.558: INFO: Number of nodes with available pods: 0 May 12 20:10:46.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:47.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:47.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:47.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:47.557: INFO: Number of nodes with available pods: 0 May 12 20:10:47.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:48.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:48.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:48.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:48.557: INFO: Number of nodes with available pods: 0 May 12 20:10:48.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:49.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:49.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:49.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:49.556: INFO: Number of nodes with available pods: 0 May 12 20:10:49.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:50.554: INFO: DaemonSet pods can't tolerate node master1 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:50.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:50.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:50.556: INFO: Number of nodes with available pods: 0 May 12 20:10:50.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:51.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:51.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:51.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:51.558: INFO: Number of nodes with available pods: 0 May 12 20:10:51.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:52.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:52.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:52.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:52.557: INFO: Number of nodes with available pods: 0 May 12 20:10:52.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:53.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:53.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:53.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:53.558: INFO: Number of nodes with available pods: 0 May 12 20:10:53.558: INFO: Node node1 is running more than one daemon pod May 12 20:10:54.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:54.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:54.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:54.556: INFO: Number of nodes with available pods: 0 May 12 20:10:54.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:55.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:55.553: 
INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:55.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:55.556: INFO: Number of nodes with available pods: 0 May 12 20:10:55.556: INFO: Node node1 is running more than one daemon pod May 12 20:10:56.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:56.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:56.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:56.557: INFO: Number of nodes with available pods: 0 May 12 20:10:56.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:57.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:57.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:57.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:57.557: INFO: Number of nodes with available pods: 0 May 12 20:10:57.557: INFO: Node node1 is running more than one daemon pod May 12 20:10:58.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:58.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:58.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:58.560: INFO: Number of nodes with available pods: 0 May 12 20:10:58.560: INFO: Node node1 is running more than one daemon pod May 12 20:10:59.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:59.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:59.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:10:59.556: INFO: Number of nodes with available pods: 0 May 12 20:10:59.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:00.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:00.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 12 20:11:00.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:00.557: INFO: Number of nodes with available pods: 0 May 12 20:11:00.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:01.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:01.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:01.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:01.558: INFO: Number of nodes with available pods: 0 May 12 20:11:01.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:02.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:02.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:02.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:02.557: INFO: Number of nodes with available pods: 0 May 12 20:11:02.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:03.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:03.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:03.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:03.558: INFO: Number of nodes with available pods: 0 May 12 20:11:03.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:04.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:04.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:04.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:04.557: INFO: Number of nodes with available pods: 0 May 12 20:11:04.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:05.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:05.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:05.554: INFO: DaemonSet pods can't tolerate node master3 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:05.556: INFO: Number of nodes with available pods: 0 May 12 20:11:05.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:06.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:06.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:06.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:06.558: INFO: Number of nodes with available pods: 0 May 12 20:11:06.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:07.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:07.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:07.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:07.557: INFO: Number of nodes with available pods: 0 May 12 20:11:07.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:08.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:08.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:08.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:08.559: INFO: Number of nodes with available pods: 0 May 12 20:11:08.559: INFO: Node node1 is running more than one daemon pod May 12 20:11:09.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:09.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:09.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:09.557: INFO: Number of nodes with available pods: 0 May 12 20:11:09.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:10.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:10.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:10.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:10.555: 
INFO: Number of nodes with available pods: 0 May 12 20:11:10.555: INFO: Node node1 is running more than one daemon pod May 12 20:11:11.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:11.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:11.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:11.558: INFO: Number of nodes with available pods: 0 May 12 20:11:11.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:12.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:12.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:12.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:12.558: INFO: Number of nodes with available pods: 0 May 12 20:11:12.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:13.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:13.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:13.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:13.556: INFO: Number of nodes with available pods: 0 May 12 20:11:13.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:14.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:14.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:14.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:14.557: INFO: Number of nodes with available pods: 0 May 12 20:11:14.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:15.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:15.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:15.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:15.556: INFO: Number of nodes with available pods: 0 May 12 20:11:15.556: INFO: Node node1 is running more than one daemon pod May 
12 20:11:16.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:16.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:16.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:16.558: INFO: Number of nodes with available pods: 0 May 12 20:11:16.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:17.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:17.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:17.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:17.557: INFO: Number of nodes with available pods: 0 May 12 20:11:17.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:18.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:18.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:18.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:18.557: INFO: Number of nodes with available pods: 0 May 12 20:11:18.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:19.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:19.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:19.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:19.557: INFO: Number of nodes with available pods: 0 May 12 20:11:19.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:20.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:20.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:20.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:20.558: INFO: Number of nodes with available pods: 0 May 12 20:11:20.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:21.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:21.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:21.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:21.556: INFO: Number of nodes with available pods: 0 May 12 20:11:21.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:22.557: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:22.557: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:22.557: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:22.559: INFO: Number of nodes with available pods: 0 May 12 20:11:22.559: INFO: Node node1 is running more than one daemon pod May 12 20:11:23.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:23.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:23.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:23.560: INFO: Number of nodes with available pods: 0 May 12 20:11:23.560: INFO: Node node1 is running more than one daemon pod May 12 20:11:24.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:24.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:24.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:24.557: INFO: Number of nodes with available pods: 0 May 12 20:11:24.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:25.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:25.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:25.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:25.556: INFO: Number of nodes with available pods: 0 May 12 20:11:25.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:26.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:26.552: INFO: DaemonSet pods can't tolerate node 
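The repeated "can't tolerate" line comes from the e2e DaemonSet check: a node only counts toward the available-pod total if the DaemonSet pod's tolerations cover every NoSchedule taint on that node, and the test pod carries no toleration for node-role.kubernetes.io/master, so master1-3 are skipped on every poll. Below is a minimal sketch of that matching rule; the Taint and Toleration types here are local stand-ins for the relevant k8s.io/api fields, and the logic is simplified from the upstream helper rather than the framework's actual code.

package main

import "fmt"

// Local stand-ins (assumption) for the corev1 Taint/Toleration fields that matter here.
type Taint struct {
	Key    string
	Value  string
	Effect string // e.g. "NoSchedule"
}

type Toleration struct {
	Key      string
	Operator string // "Exists" or "Equal"
	Value    string
	Effect   string // empty matches any effect
}

// tolerates reports whether one toleration covers one taint,
// following a simplified version of the upstream matching rules.
func tolerates(tol Toleration, taint Taint) bool {
	if tol.Effect != "" && tol.Effect != taint.Effect {
		return false
	}
	if tol.Key != "" && tol.Key != taint.Key {
		return false
	}
	switch tol.Operator {
	case "", "Equal":
		return tol.Value == taint.Value
	case "Exists":
		return true
	}
	return false
}

// toleratesAll mirrors the scheduling rule seen in the log: every
// NoSchedule taint on the node must be covered by some toleration.
func toleratesAll(tols []Toleration, taints []Taint) bool {
	for _, taint := range taints {
		covered := false
		for _, tol := range tols {
			if tolerates(tol, taint) {
				covered = true
				break
			}
		}
		if !covered {
			return false
		}
	}
	return true
}

func main() {
	master := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}

	// No master toleration, as with the test DaemonSet in the log:
	// the framework skips master1-3 when counting available pods.
	fmt.Println(toleratesAll(nil, master)) // false

	// With the usual Exists toleration the masters would be eligible.
	withTol := []Toleration{{Key: "node-role.kubernetes.io/master", Operator: "Exists", Effect: "NoSchedule"}}
	fmt.Println(toleratesAll(withTol, master)) // true
}

With that toleration added, only the worker-node pod count would gate the poll; without it, the loop below keeps skipping the masters until it times out.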
master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:26.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:26.555: INFO: Number of nodes with available pods: 0 May 12 20:11:26.555: INFO: Node node1 is running more than one daemon pod May 12 20:11:27.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:27.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:27.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:27.557: INFO: Number of nodes with available pods: 0 May 12 20:11:27.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:28.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:28.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:28.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:28.556: INFO: Number of nodes with available pods: 0 May 12 20:11:28.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:29.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:29.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:29.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:29.555: INFO: Number of nodes with available pods: 0 May 12 20:11:29.555: INFO: Node node1 is running more than one daemon pod May 12 20:11:30.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:30.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:30.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:30.558: INFO: Number of nodes with available pods: 0 May 12 20:11:30.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:31.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:31.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 
12 20:11:31.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:31.555: INFO: Number of nodes with available pods: 0 May 12 20:11:31.555: INFO: Node node1 is running more than one daemon pod May 12 20:11:32.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:32.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:32.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:32.558: INFO: Number of nodes with available pods: 0 May 12 20:11:32.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:33.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:33.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:33.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:33.557: INFO: Number of nodes with available pods: 0 May 12 20:11:33.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:34.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:34.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:34.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:34.558: INFO: Number of nodes with available pods: 0 May 12 20:11:34.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:35.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:35.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:35.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:35.557: INFO: Number of nodes with available pods: 0 May 12 20:11:35.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:36.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:36.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:36.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:36.559: INFO: Number of nodes with available pods: 0 May 12 20:11:36.560: INFO: Node node1 is running more than one daemon pod May 12 20:11:37.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:37.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:37.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:37.556: INFO: Number of nodes with available pods: 0 May 12 20:11:37.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:38.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:38.557: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:38.557: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:38.559: INFO: Number of nodes with available pods: 0 May 12 20:11:38.559: INFO: Node node1 is running more than one daemon pod May 12 20:11:39.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:39.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:39.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:39.559: INFO: Number of nodes with available pods: 0 May 12 20:11:39.559: INFO: Node node1 is running more than one daemon pod May 12 20:11:40.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:40.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:40.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:40.555: INFO: Number of nodes with available pods: 0 May 12 20:11:40.555: INFO: Node node1 is running more than one daemon pod May 12 20:11:41.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:41.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:41.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:41.558: INFO: Number of nodes with available pods: 0 
May 12 20:11:41.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:42.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:42.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:42.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:42.560: INFO: Number of nodes with available pods: 0 May 12 20:11:42.560: INFO: Node node1 is running more than one daemon pod May 12 20:11:43.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:43.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:43.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:43.557: INFO: Number of nodes with available pods: 0 May 12 20:11:43.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:44.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:44.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:44.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:44.558: INFO: Number of nodes with available pods: 0 May 12 20:11:44.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:45.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:45.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:45.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:45.556: INFO: Number of nodes with available pods: 0 May 12 20:11:45.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:46.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:46.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:46.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:46.559: INFO: Number of nodes with available pods: 0 May 12 20:11:46.559: INFO: Node node1 is running more than one daemon pod May 12 20:11:47.557: INFO: DaemonSet pods can't 
tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:47.557: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:47.557: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:47.560: INFO: Number of nodes with available pods: 0 May 12 20:11:47.560: INFO: Node node1 is running more than one daemon pod May 12 20:11:48.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:48.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:48.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:48.558: INFO: Number of nodes with available pods: 0 May 12 20:11:48.559: INFO: Node node1 is running more than one daemon pod May 12 20:11:49.557: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:49.557: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:49.557: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:49.559: INFO: Number of nodes with available pods: 0 May 12 20:11:49.559: INFO: Node node1 is running more than one daemon pod May 12 20:11:50.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:50.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:50.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:50.558: INFO: Number of nodes with available pods: 0 May 12 20:11:50.559: INFO: Node node1 is running more than one daemon pod May 12 20:11:51.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:51.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:51.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:51.557: INFO: Number of nodes with available pods: 0 May 12 20:11:51.557: INFO: Node node1 is running more than one daemon pod May 12 20:11:52.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node May 12 20:11:52.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:52.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:52.558: INFO: Number of nodes with available pods: 0 May 12 20:11:52.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:53.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:53.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:53.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:53.558: INFO: Number of nodes with available pods: 0 May 12 20:11:53.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:54.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:54.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:54.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:54.556: INFO: Number of nodes with available pods: 0 May 12 20:11:54.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:55.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:55.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:55.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:55.558: INFO: Number of nodes with available pods: 0 May 12 20:11:55.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:56.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:56.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:56.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:56.555: INFO: Number of nodes with available pods: 0 May 12 20:11:56.555: INFO: Node node1 is running more than one daemon pod May 12 20:11:57.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:57.553: INFO: DaemonSet pods can't tolerate node master2 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:57.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:57.556: INFO: Number of nodes with available pods: 0 May 12 20:11:57.556: INFO: Node node1 is running more than one daemon pod May 12 20:11:58.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:58.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:58.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:58.558: INFO: Number of nodes with available pods: 0 May 12 20:11:58.558: INFO: Node node1 is running more than one daemon pod May 12 20:11:59.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:59.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:59.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:11:59.556: INFO: Number of nodes with available pods: 0 May 12 20:11:59.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:00.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:00.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:00.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:00.555: INFO: Number of nodes with available pods: 0 May 12 20:12:00.555: INFO: Node node1 is running more than one daemon pod May 12 20:12:01.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:01.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:01.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:01.556: INFO: Number of nodes with available pods: 0 May 12 20:12:01.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:02.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:02.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:02.555: 
INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:02.557: INFO: Number of nodes with available pods: 0 May 12 20:12:02.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:03.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:03.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:03.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:03.558: INFO: Number of nodes with available pods: 0 May 12 20:12:03.558: INFO: Node node1 is running more than one daemon pod May 12 20:12:04.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:04.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:04.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:04.556: INFO: Number of nodes with available pods: 0 May 12 20:12:04.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:05.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:05.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:05.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:05.558: INFO: Number of nodes with available pods: 0 May 12 20:12:05.558: INFO: Node node1 is running more than one daemon pod May 12 20:12:06.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:06.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:06.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:06.557: INFO: Number of nodes with available pods: 0 May 12 20:12:06.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:07.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:07.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:07.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 12 20:12:07.558: INFO: Number of nodes with available pods: 0 May 12 20:12:07.558: INFO: Node node1 is running more than one daemon pod May 12 20:12:08.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:08.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:08.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:08.559: INFO: Number of nodes with available pods: 0 May 12 20:12:08.559: INFO: Node node1 is running more than one daemon pod May 12 20:12:09.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:09.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:09.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:09.557: INFO: Number of nodes with available pods: 0 May 12 20:12:09.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:10.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:10.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:10.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:10.555: INFO: Number of nodes with available pods: 0 May 12 20:12:10.555: INFO: Node node1 is running more than one daemon pod May 12 20:12:11.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:11.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:11.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:11.558: INFO: Number of nodes with available pods: 0 May 12 20:12:11.558: INFO: Node node1 is running more than one daemon pod May 12 20:12:12.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:12.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:12.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:12.558: INFO: Number of nodes with available pods: 0 May 12 20:12:12.558: 
INFO: Node node1 is running more than one daemon pod May 12 20:12:13.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:13.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:13.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:13.556: INFO: Number of nodes with available pods: 0 May 12 20:12:13.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:14.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:14.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:14.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:14.557: INFO: Number of nodes with available pods: 0 May 12 20:12:14.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:15.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:15.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:15.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:15.555: INFO: Number of nodes with available pods: 0 May 12 20:12:15.555: INFO: Node node1 is running more than one daemon pod May 12 20:12:16.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:16.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:16.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:16.558: INFO: Number of nodes with available pods: 0 May 12 20:12:16.558: INFO: Node node1 is running more than one daemon pod May 12 20:12:17.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:17.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:17.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:17.556: INFO: Number of nodes with available pods: 0 May 12 20:12:17.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:18.554: INFO: DaemonSet pods can't tolerate node master1 
with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:18.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:18.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:18.557: INFO: Number of nodes with available pods: 0 May 12 20:12:18.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:19.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:19.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:19.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:19.557: INFO: Number of nodes with available pods: 0 May 12 20:12:19.558: INFO: Node node1 is running more than one daemon pod May 12 20:12:20.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:20.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:20.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:20.556: INFO: Number of nodes with available pods: 0 May 12 20:12:20.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:21.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:21.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:21.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:21.556: INFO: Number of nodes with available pods: 0 May 12 20:12:21.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:22.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:22.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:22.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:22.557: INFO: Number of nodes with available pods: 0 May 12 20:12:22.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:23.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 
20:12:23.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:23.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:23.556: INFO: Number of nodes with available pods: 0 May 12 20:12:23.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:24.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:24.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:24.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:24.556: INFO: Number of nodes with available pods: 0 May 12 20:12:24.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:25.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:25.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:25.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:25.556: INFO: Number of nodes with available pods: 0 May 12 20:12:25.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:26.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:26.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:26.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:26.555: INFO: Number of nodes with available pods: 0 May 12 20:12:26.555: INFO: Node node1 is running more than one daemon pod May 12 20:12:27.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:27.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:27.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:27.558: INFO: Number of nodes with available pods: 0 May 12 20:12:27.558: INFO: Node node1 is running more than one daemon pod May 12 20:12:28.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:28.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:28.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:28.559: INFO: Number of nodes with available pods: 0 May 12 20:12:28.559: INFO: Node node1 is running more than one daemon pod May 12 20:12:29.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:29.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:29.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:29.555: INFO: Number of nodes with available pods: 0 May 12 20:12:29.555: INFO: Node node1 is running more than one daemon pod May 12 20:12:30.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:30.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:30.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:30.556: INFO: Number of nodes with available pods: 0 May 12 20:12:30.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:31.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:31.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:31.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:31.557: INFO: Number of nodes with available pods: 0 May 12 20:12:31.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:32.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:32.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:32.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:32.556: INFO: Number of nodes with available pods: 0 May 12 20:12:32.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:33.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:33.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:33.555: INFO: DaemonSet pods can't tolerate node 
master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:33.559: INFO: Number of nodes with available pods: 0 May 12 20:12:33.559: INFO: Node node1 is running more than one daemon pod May 12 20:12:34.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:34.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:34.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:34.556: INFO: Number of nodes with available pods: 0 May 12 20:12:34.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:35.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:35.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:35.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:35.556: INFO: Number of nodes with available pods: 0 May 12 20:12:35.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:36.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:36.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:36.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:36.559: INFO: Number of nodes with available pods: 0 May 12 20:12:36.559: INFO: Node node1 is running more than one daemon pod May 12 20:12:37.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:37.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:37.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:37.555: INFO: Number of nodes with available pods: 0 May 12 20:12:37.555: INFO: Node node1 is running more than one daemon pod May 12 20:12:38.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:38.557: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:38.557: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 
12 20:12:38.559: INFO: Number of nodes with available pods: 0 May 12 20:12:38.559: INFO: Node node1 is running more than one daemon pod May 12 20:12:39.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:39.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:39.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:39.557: INFO: Number of nodes with available pods: 0 May 12 20:12:39.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:40.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:40.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:40.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:40.556: INFO: Number of nodes with available pods: 0 May 12 20:12:40.556: INFO: Node node1 is running more than one daemon pod May 12 20:12:41.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:41.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:41.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:41.555: INFO: Number of nodes with available pods: 0 May 12 20:12:41.555: INFO: Node node1 is running more than one daemon pod May 12 20:12:42.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:42.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:42.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:42.557: INFO: Number of nodes with available pods: 0 May 12 20:12:42.557: INFO: Node node1 is running more than one daemon pod May 12 20:12:43.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:43.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:43.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 20:12:43.556: INFO: Number of nodes with available pods: 0 May 12 20:12:43.556: INFO: Node node1 is running more than one 
daemon pod
May 12 20:12:44.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:12:44.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:12:44.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:12:44.557: INFO: Number of nodes with available pods: 0
May 12 20:12:44.557: INFO: Node node1 is running more than one daemon pod
[... the same five-line poll block repeats roughly once per second from 20:12:45 through 20:14:17, every iteration reporting "Number of nodes with available pods: 0" ...]
May 12 20:14:17.563: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:14:17.563: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:14:17.563: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 12 20:14:17.566: INFO: Number of nodes with available pods: 0
May 12 20:14:17.566: INFO: Node node1 is running more than one daemon pod
May 12 20:14:17.567: FAIL: error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc0002bc200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:323 +0x4d5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001136300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001136300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001136300, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
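The failure above is the e2e framework's availability poll giving up: once per second it re-lists the DaemonSet's pods, skips the tainted masters, counts nodes running a Ready daemon pod, and stops when the count matches the eligible nodes. What follows is a minimal Go sketch of that kind of loop written against client-go; the package, function names, and the five-minute timeout are illustrative assumptions, not the framework's actual code (the real check lives in test/e2e/apps/daemon_set.go). Note that wait.PollImmediate returns wait.ErrWaitTimeout on expiry, whose message is exactly the "timed out waiting for the condition" string logged above.

// daemoncheck.go - illustrative sketch of the availability poll behind
// the repeated log lines above (assumed names; not the e2e framework code).
package daemoncheck

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// nodesWithAvailablePod counts distinct nodes running a Ready daemon pod
// selected by the given label selector.
func nodesWithAvailablePod(cs kubernetes.Interface, ns, selector string) (int, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return 0, err
	}
	ready := map[string]bool{}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready[p.Spec.NodeName] = true
			}
		}
	}
	return len(ready), nil
}

// waitForDaemonPods polls once per second until `want` eligible nodes
// (masters excluded, since the test pod carries no NoSchedule toleration)
// each report an available pod. On timeout it returns wait.ErrWaitTimeout,
// i.e. "timed out waiting for the condition".
func waitForDaemonPods(cs kubernetes.Interface, ns, selector string, want int) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		n, err := nodesWithAvailablePod(cs, ns, selector)
		if err != nil {
			return false, err
		}
		fmt.Printf("Number of nodes with available pods: %d\n", n)
		return n == want, nil
	})
}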
May 12 20:14:21.454: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for daemon-set-f7j7v: { } Scheduled: Successfully assigned daemonsets-2297/daemon-set-f7j7v to node1
May 12 20:14:21.454: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for daemon-set-qbph7: { } Scheduled: Successfully assigned daemonsets-2297/daemon-set-qbph7 to node2
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:17 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-f7j7v
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:17 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-qbph7
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:19 +0000 UTC - event for daemon-set-f7j7v: {multus } AddedInterface: Add eth0 [10.244.3.29/24]
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:19 +0000 UTC - event for daemon-set-f7j7v: {kubelet node1} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:19 +0000 UTC - event for daemon-set-qbph7: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:19 +0000 UTC - event for daemon-set-qbph7: {multus } AddedInterface: Add eth0 [10.244.4.102/24]
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:20 +0000 UTC - event for daemon-set-f7j7v: {kubelet node1} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:20 +0000 UTC - event for daemon-set-f7j7v: {kubelet node1} Failed: Error: ErrImagePull
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:21 +0000 UTC - event for daemon-set-f7j7v: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:22 +0000 UTC - event for daemon-set-f7j7v: {kubelet node1} Failed: Error: ImagePullBackOff
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:22 +0000 UTC - event for daemon-set-f7j7v: {multus } AddedInterface: Add eth0 [10.244.3.30/24]
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:22 +0000 UTC - event for daemon-set-f7j7v: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:24 +0000 UTC - event for daemon-set-qbph7: {kubelet node2} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:24 +0000 UTC - event for daemon-set-qbph7: {kubelet node2} Failed: Error: ErrImagePull
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:24 +0000 UTC - event for daemon-set-qbph7: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:24 +0000 UTC - event for daemon-set-qbph7: {kubelet node2} Failed: Error: ImagePullBackOff
May 12 20:14:21.454: INFO: At 2021-05-12 20:09:40 +0000 UTC - event for daemon-set-f7j7v: {kubelet node1} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 12 20:14:21.457: INFO: POD NODE PHASE GRACE CONDITIONS May 12 20:14:21.457: INFO: May 12 20:14:21.460: INFO: Logging node info for node master1 May 12 20:14:21.463: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 03e64d13-444c-41eb-b6bd-3745f05cd1cd 79351 0 2021-05-12 16:30:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ce:94:4b:fc:cf:bc"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:30:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-05-12 16:31:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:27 +0000 UTC,LastTransitionTime:2021-05-12 16:35:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:15 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:15 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:15 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:14:15 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:201a06a0695b47fdbdd7df0fdd94f4dc,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a7dddfd4-eda7-4a08-9750-d4a3c87c0cdd,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:1935bf73835eb6d4446668a5484bb3724d97d926b26b41c7bf064aa3a5a8bc5f tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:14:21.463: INFO: Logging kubelet events for node master1 May 12 20:14:21.466: INFO: Logging pods the kubelet thinks is 
on node master1 May 12 20:14:21.481: INFO: docker-registry-docker-registry-56cbc7bc58-lfjnt started at 2021-05-12 16:36:16 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.481: INFO: Container docker-registry ready: true, restart count 0 May 12 20:14:21.481: INFO: Container nginx ready: true, restart count 0 May 12 20:14:21.481: INFO: kube-apiserver-master1 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.481: INFO: Container kube-apiserver ready: true, restart count 0 May 12 20:14:21.481: INFO: kube-scheduler-master1 started at 2021-05-12 16:32:00 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.481: INFO: Container kube-scheduler ready: true, restart count 0 May 12 20:14:21.481: INFO: kube-proxy-v5zpq started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.481: INFO: Container kube-proxy ready: true, restart count 2 May 12 20:14:21.481: INFO: kube-flannel-rc2gh started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:14:21.481: INFO: Init container install-cni ready: true, restart count 2 May 12 20:14:21.481: INFO: Container kube-flannel ready: true, restart count 3 May 12 20:14:21.481: INFO: coredns-7677f9bb54-4lkxs started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.481: INFO: Container coredns ready: true, restart count 2 May 12 20:14:21.481: INFO: kube-controller-manager-master1 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.481: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 20:14:21.481: INFO: kube-multus-ds-amd64-rcwwm started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.481: INFO: Container kube-multus ready: true, restart count 1 May 12 20:14:21.481: INFO: node-exporter-wd9j2 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.481: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:14:21.481: INFO: Container node-exporter ready: true, restart count 0 W0512 20:14:21.494921 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
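Nothing in the master1 diagnostics above points at the node itself; the root cause is in the events earlier: both daemon pods cycled through ErrImagePull and ImagePullBackOff because anonymous pulls of docker.io/library/httpd:2.4.38-alpine hit Docker Hub's toomanyrequests rate limit. A common mitigation is to authenticate pulls through an imagePullSecret; the client-go sketch below creates one — the "regcred" name and the credentials are placeholders, not values from this cluster:

```go
package main

import (
	"context"
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// createRegistryCredentials creates a kubernetes.io/dockerconfigjson secret
// that pods can reference via imagePullSecrets, so pulls from docker.io are
// authenticated and count against a higher rate limit.
func createRegistryCredentials(cs kubernetes.Interface, ns string) error {
	dockerCfg, err := json.Marshal(map[string]interface{}{
		"auths": map[string]interface{}{
			"https://index.docker.io/v1/": map[string]string{
				"username": "example-user", // placeholder credential
				"password": "example-pass", // placeholder credential
			},
		},
	})
	if err != nil {
		return err
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "regcred", Namespace: ns},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data:       map[string][]byte{corev1.DockerConfigJsonKey: dockerCfg},
	}
	_, err = cs.CoreV1().Secrets(ns).Create(context.TODO(), secret, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := createRegistryCredentials(cs, "daemonsets-2297"); err != nil {
		panic(err)
	}
	// The pod template would then reference the secret:
	//   spec.template.spec.imagePullSecrets = []corev1.LocalObjectReference{{Name: "regcred"}}
}
```

The node image lists in this log also show a local registry at localhost:30500 (e.g. localhost:30500/cmk:v1.5.1); retagging and serving the test image through such a mirror would be another way to avoid the Docker Hub limit entirely.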
May 12 20:14:21.519: INFO: Latency metrics for node master1 May 12 20:14:21.519: INFO: Logging node info for node master2 May 12 20:14:21.522: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 f3d8b0a9-b7ca-4942-bfc7-69b182fddff8 79345 0 2021-05-12 16:31:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:9d:92:dd:24:1d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:31:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:31:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-12 16:33:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:36:36 +0000 UTC,LastTransitionTime:2021-05-12 16:36:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:13 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:13 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:13 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:14:13 +0000 UTC,LastTransitionTime:2021-05-12 16:33:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ef3726c355844802be21d744dd831d84,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:6feec048-83de-4ff4-ae36-12985bcda218,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:14:21.522: INFO: Logging kubelet events for node master2 May 12 20:14:21.524: INFO: Logging pods the kubelet thinks is on node master2 May 12 20:14:21.538: INFO: kube-flannel-pppch started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:14:21.539: INFO: Init container install-cni ready: true, restart count 2 May 12 20:14:21.539: INFO: Container kube-flannel ready: true, restart count 1 May 12 20:14:21.539: INFO: kube-multus-ds-amd64-4swdk started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.539: INFO: Container kube-multus ready: true, restart count 1 May 12 20:14:21.539: INFO: coredns-7677f9bb54-b628b started at 2021-05-12 16:33:48 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.539: INFO: Container coredns ready: true, restart count 1 May 12 20:14:21.539: INFO: node-exporter-dprww started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.539: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:14:21.539: INFO: Container node-exporter ready: true, restart count 0 May 12 20:14:21.539: INFO: kube-apiserver-master2 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.539: INFO: Container kube-apiserver ready: true, restart count 0 May 12 20:14:21.539: 
INFO: kube-controller-manager-master2 started at 2021-05-12 16:38:53 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.539: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 20:14:21.539: INFO: kube-scheduler-master2 started at 2021-05-12 16:32:00 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.539: INFO: Container kube-scheduler ready: true, restart count 2 May 12 20:14:21.539: INFO: kube-proxy-6ljsd started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.539: INFO: Container kube-proxy ready: true, restart count 2 W0512 20:14:21.550898 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:14:21.574: INFO: Latency metrics for node master2 May 12 20:14:21.574: INFO: Logging node info for node master3 May 12 20:14:21.577: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 a0124635-fd55-4774-af26-32765b659025 79350 0 2021-05-12 16:31:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"8e:91:ce:de:d3:10"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:31:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-12 16:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-12 16:39:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:42 +0000 UTC,LastTransitionTime:2021-05-12 16:35:42 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:14 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:14 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:14 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:14:14 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8966dda31ac243c3b9142da7eb1a3315,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:29bf94f6-9eaa-4435-8e2a-f17b4721b8df,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:14:21.577: INFO: Logging kubelet events for node master3 May 12 
20:14:21.579: INFO: Logging pods the kubelet thinks is on node master3 May 12 20:14:21.595: INFO: kube-proxy-d57dm started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.595: INFO: Container kube-proxy ready: true, restart count 1 May 12 20:14:21.595: INFO: kube-flannel-hmbzn started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:14:21.595: INFO: Init container install-cni ready: true, restart count 2 May 12 20:14:21.595: INFO: Container kube-flannel ready: true, restart count 2 May 12 20:14:21.595: INFO: dns-autoscaler-5b7b5c9b6f-8fzpk started at 2021-05-12 16:33:51 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.595: INFO: Container autoscaler ready: true, restart count 1 May 12 20:14:21.595: INFO: node-feature-discovery-controller-5bf5c49849-gdz9z started at 2021-05-12 16:38:58 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.595: INFO: Container nfd-controller ready: true, restart count 0 May 12 20:14:21.595: INFO: node-exporter-jzp84 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.595: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:14:21.595: INFO: Container node-exporter ready: true, restart count 0 May 12 20:14:21.595: INFO: kube-scheduler-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.595: INFO: Container kube-scheduler ready: true, restart count 3 May 12 20:14:21.595: INFO: kube-controller-manager-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.595: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 20:14:21.595: INFO: kube-multus-ds-amd64-4hlf8 started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.595: INFO: Container kube-multus ready: true, restart count 1 May 12 20:14:21.595: INFO: prometheus-operator-5bb8cb9d8f-sqgl4 started at 2021-05-12 16:42:55 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.595: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:14:21.595: INFO: Container prometheus-operator ready: true, restart count 0 May 12 20:14:21.595: INFO: kube-apiserver-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.595: INFO: Container kube-apiserver ready: true, restart count 0 W0512 20:14:21.608600 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
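One more pattern worth noting from the wait loop: every iteration logged that the DaemonSet pods can't tolerate the node-role.kubernetes.io/master:NoSchedule taint carried by master1 through master3, which is why only node1 and node2 were ever candidates. That is expected for this test, but a DaemonSet that should also cover masters needs a matching toleration in its pod template; a minimal sketch with the k8s.io/api/core/v1 types (container name and image are illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Toleration matching the master taint seen throughout the poll above;
	// Operator "Exists" matches the taint regardless of its (empty) value.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	// A DaemonSet that should also run on masters carries the toleration
	// in its pod template spec (spec.template.spec.tolerations):
	podSpec := corev1.PodSpec{
		Tolerations: []corev1.Toleration{tol},
		Containers:  []corev1.Container{{Name: "app", Image: "httpd:2.4.38-alpine"}},
	}
	fmt.Printf("%+v\n", podSpec.Tolerations)
}
```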
May 12 20:14:21.633: INFO: Latency metrics for node master3 May 12 20:14:21.633: INFO: Logging node info for node node1 May 12 20:14:21.636: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 2f7b1644-6865-412d-b423-194903813633 79375 0 2021-05-12 16:32:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"6a:b5:62:81:d9:8e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-12 16:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-12 16:39:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-12 16:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-12 19:02:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:36:30 +0000 UTC,LastTransitionTime:2021-05-12 16:36:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:20 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:20 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:20 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:14:20 +0000 UTC,LastTransitionTime:2021-05-12 16:33:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d9a9ef88c91340689744923e9951f78a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:d2f244db-a37b-4dba-a8f9-6ea24c6cdb4e,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:96570ae3225d359c9667e2cbd57987388aa9d3be9d5a198ed9b677c7f9f4e450 localhost:30500/barometer-collectd:stable],SizeBytes:1464260582,},ContainerImage{Names:[@ :],SizeBytes:1002488025,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f178577724ef3b8118ca58277a27d594257de6c55a813b61e421df4008a7f73b cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:1636899c10870ab66c48d960a9df620f4f9e86a0c72fbacf36032d27404e7e6c golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:34ac17eb150f557f6ce94ea34e7c03f3899b717307cad633213a9488a431179f nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392919,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:14:21.637: INFO: Logging kubelet events for node node1 May 12 20:14:21.640: INFO: Logging pods the kubelet thinks is on node node1 May 12 20:14:21.658: INFO: nginx-proxy-node1 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.659: INFO: Container nginx-proxy ready: true, restart count 2 May 12 20:14:21.659: INFO: node-feature-discovery-worker-qtn84 started at 2021-05-12 16:38:48 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.659: INFO: Container nfd-worker ready: true, restart count 0 May 12 20:14:21.659: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.659: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 12 20:14:21.659: INFO: prometheus-k8s-0 started at 2021-05-12 16:43:20 +0000 UTC (0+5 container statuses recorded) May 12 20:14:21.659: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 12 20:14:21.659: INFO: Container grafana ready: true, restart count 0 May 12 20:14:21.659: INFO: Container prometheus ready: true, restart count 1 May 12 20:14:21.659: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 12 20:14:21.659: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 12 20:14:21.659: INFO: collectd-5mpmz started at 2021-05-12 16:49:38 +0000 UTC (0+3 container statuses recorded) May 12 20:14:21.659: INFO: Container collectd ready: true, restart count 0 May 12 20:14:21.659: INFO: Container collectd-exporter ready: true, restart count 0 May 12 20:14:21.659: INFO: Container rbac-proxy ready: true, restart count 0 May 12 20:14:21.659: INFO: kube-proxy-r9vsx started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.659: INFO: Container kube-proxy ready: true, restart count 1 May 12 20:14:21.659: INFO: node-exporter-ddxbd started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.659: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:14:21.659: INFO: Container node-exporter ready: true, restart count 0 May 12 20:14:21.659: INFO: kube-flannel-r7w6z started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:14:21.659: INFO: Init container install-cni ready: true, restart count 1 May 12 20:14:21.659: INFO: Container kube-flannel ready: true, restart count 2 May 12 20:14:21.659: INFO: kube-multus-ds-amd64-fhzwc started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.659: INFO: Container kube-multus ready: true, restart count 1 May 12 20:14:21.659: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.659: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 12 20:14:21.659: INFO: cmk-v4qwz started at 2021-05-12 16:42:07 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.659: INFO: Container nodereport ready: true, restart count 0 May 12 20:14:21.659: INFO: Container reconcile ready: true, restart 
count 0 May 12 20:14:21.659: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff started at 2021-05-12 16:39:41 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.659: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 20:14:21.659: INFO: cmk-init-discover-node1-2x2zk started at 2021-05-12 16:41:25 +0000 UTC (0+3 container statuses recorded) May 12 20:14:21.659: INFO: Container discover ready: false, restart count 0 May 12 20:14:21.659: INFO: Container init ready: false, restart count 0 May 12 20:14:21.659: INFO: Container install ready: false, restart count 0 W0512 20:14:21.672279 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:14:21.702: INFO: Latency metrics for node node1 May 12 20:14:21.702: INFO: Logging node info for node node2 May 12 20:14:21.706: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 16e4ef1f-0c51-44d5-a257-2eaa348c0d52 79381 0 2021-05-12 16:32:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:9a:ee:23:4c:ac"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-12 16:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-12 16:39:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-12 16:41:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-12 19:08:41 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-05-12 19:08:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:14 +0000 UTC,LastTransitionTime:2021-05-12 16:35:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:20 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:20 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:14:20 +0000 
UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:14:20 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eebaf4858fee4a739009f2f7f2717953,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:15bd8119-98f5-4e13-a2fa-897c2224dd6e,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:96570ae3225d359c9667e2cbd57987388aa9d3be9d5a198ed9b677c7f9f4e450 localhost:30500/barometer-collectd:stable],SizeBytes:1464260582,},ContainerImage{Names:[localhost:30500/cmk@sha256:f178577724ef3b8118ca58277a27d594257de6c55a813b61e421df4008a7f73b localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:34ac17eb150f557f6ce94ea34e7c03f3899b717307cad633213a9488a431179f nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 
quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392919,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:1935bf73835eb6d4446668a5484bb3724d97d926b26b41c7bf064aa3a5a8bc5f localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:14:21.706: INFO: Logging kubelet events for node node2 May 12 20:14:21.708: INFO: Logging pods the kubelet thinks is on node node2 May 12 20:14:21.725: INFO: cmk-init-discover-node2-qrd9v started at 2021-05-12 16:41:44 +0000 UTC (0+3 container statuses recorded) May 12 20:14:21.725: INFO: Container discover ready: false, restart count 0 May 12 20:14:21.725: INFO: Container init ready: false, restart count 0 May 12 20:14:21.725: INFO: Container install ready: false, restart count 0 May 12 20:14:21.725: INFO: kube-proxy-grtqc started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 
20:14:21.725: INFO: Container kube-proxy ready: true, restart count 2 May 12 20:14:21.725: INFO: cmk-gbw5d started at 2021-05-12 16:42:08 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.725: INFO: Container nodereport ready: true, restart count 0 May 12 20:14:21.725: INFO: Container reconcile ready: true, restart count 0 May 12 20:14:21.725: INFO: collectd-tng6x started at 2021-05-12 16:49:38 +0000 UTC (0+3 container statuses recorded) May 12 20:14:21.725: INFO: Container collectd ready: true, restart count 0 May 12 20:14:21.725: INFO: Container collectd-exporter ready: true, restart count 0 May 12 20:14:21.725: INFO: Container rbac-proxy ready: true, restart count 0 May 12 20:14:21.725: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-j6r4r started at 2021-05-12 16:39:41 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.725: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 20:14:21.725: INFO: node-exporter-h5rv7 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.725: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:14:21.725: INFO: Container node-exporter ready: true, restart count 0 May 12 20:14:21.725: INFO: kube-multus-ds-amd64-k28rf started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.725: INFO: Container kube-multus ready: true, restart count 1 May 12 20:14:21.725: INFO: node-feature-discovery-worker-zjvzk started at 2021-05-12 16:38:48 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.725: INFO: Container nfd-worker ready: true, restart count 0 May 12 20:14:21.725: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8xc75 started at 2021-05-12 16:46:05 +0000 UTC (0+2 container statuses recorded) May 12 20:14:21.725: INFO: Container tas-controller ready: true, restart count 0 May 12 20:14:21.725: INFO: Container tas-extender ready: true, restart count 0 May 12 20:14:21.725: INFO: cmk-webhook-6c9d5f8578-fgcvr started at 2021-05-12 16:42:08 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.725: INFO: Container cmk-webhook ready: true, restart count 0 May 12 20:14:21.725: INFO: nginx-proxy-node2 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:14:21.725: INFO: Container nginx-proxy ready: true, restart count 2 May 12 20:14:21.725: INFO: kube-flannel-rqtcs started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:14:21.725: INFO: Init container install-cni ready: true, restart count 2 May 12 20:14:21.725: INFO: Container kube-flannel ready: true, restart count 1 W0512 20:14:21.737483 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:14:21.769: INFO: Latency metrics for node node2 May 12 20:14:21.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2297" for this suite. 
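For context on the failure block below: the spec exercises a DaemonSet whose updateStrategy is OnDelete, under which the controller replaces a pod only after that pod is deleted by hand, so updating the pod template alone must not restart anything. A minimal client-go style sketch of such an object follows; the object name, the reuse of namespace "daemonsets-2297" from the teardown above, the label key, and the image choice (one already cached on the nodes listed earlier) are illustrative assumptions, not the actual construction in daemon_set.go.

```go
// Sketch only: a DaemonSet with the OnDelete update strategy, as the failing
// spec exercises. With OnDelete, editing the pod template rolls nothing; pods
// pick up the new template only when they are manually deleted and recreated.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical label set

	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-2297"},
		Spec: appsv1.DaemonSetSpec{
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.OnDeleteDaemonSetStrategyType, // pods replaced only on manual deletion
			},
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "httpd:2.4.38-alpine", // illustrative; already present in the node image lists above
					}},
				},
			},
		},
	}
	fmt.Printf("%s updateStrategy: %s\n", ds.Name, ds.Spec.UpdateStrategy.Type)
}
```

Note that the error reported below, "error waiting for daemon pod to start ... timed out waiting for the condition", means the daemon pods never reached Running at all, so the spec failed before the OnDelete semantics could even be checked; the PodSecurityPolicy admission rejections recorded later in this run are a plausible cause.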
• Failure [304.271 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not update pod when spec was updated and update strategy is OnDelete [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:312 May 12 20:14:17.567: error waiting for daemon pod to start Unexpected error: <*errors.errorString | 0xc0002bc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:323 ------------------------------ {"msg":"FAILED [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete","total":4,"completed":0,"skipped":2226,"failed":2,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","[sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete"]} ------------------------------ [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 20:14:21.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68 [It] evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 STEP: Waiting for the pdb to be processed STEP: locating a running pod May 12 20:24:23.834: FAIL: Unexpected error: <*errors.errorString | 0xc0002bc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func5.6() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241 +0x16a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001136300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001136300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001136300, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "disruption-3307". STEP: Found 4 events. May 12 20:24:23.843: INFO: At 2021-05-12 20:14:21 +0000 UTC - event for foo: {controllermanager } NoPods: No matching pods found May 12 20:24:23.843: INFO: At 2021-05-12 20:14:21 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100]] May 12 20:24:23.843: INFO: At 2021-05-12 20:14:21 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: []] May 12 20:24:23.843: INFO: At 2021-05-12 20:14:23 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. 
Allowed ports: [9103-9104]] May 12 20:24:23.845: INFO: POD NODE PHASE GRACE CONDITIONS May 12 20:24:23.845: INFO: May 12 20:24:23.850: INFO: Logging node info for node master1 May 12 20:24:23.853: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 03e64d13-444c-41eb-b6bd-3745f05cd1cd 81535 0 2021-05-12 16:30:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ce:94:4b:fc:cf:bc"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:30:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-05-12 16:31:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:27 +0000 UTC,LastTransitionTime:2021-05-12 16:35:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:17 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:17 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:17 +0000 UTC,LastTransitionTime:2021-05-12 16:30:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:24:17 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:201a06a0695b47fdbdd7df0fdd94f4dc,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:a7dddfd4-eda7-4a08-9750-d4a3c87c0cdd,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:1935bf73835eb6d4446668a5484bb3724d97d926b26b41c7bf064aa3a5a8bc5f tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:24:23.854: INFO: Logging kubelet events for node master1 May 12 20:24:23.856: INFO: Logging pods the kubelet thinks is 
on node master1 May 12 20:24:23.872: INFO: kube-proxy-v5zpq started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.872: INFO: Container kube-proxy ready: true, restart count 2 May 12 20:24:23.872: INFO: kube-flannel-rc2gh started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:24:23.872: INFO: Init container install-cni ready: true, restart count 2 May 12 20:24:23.872: INFO: Container kube-flannel ready: true, restart count 3 May 12 20:24:23.872: INFO: coredns-7677f9bb54-4lkxs started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.872: INFO: Container coredns ready: true, restart count 2 May 12 20:24:23.872: INFO: docker-registry-docker-registry-56cbc7bc58-lfjnt started at 2021-05-12 16:36:16 +0000 UTC (0+2 container statuses recorded) May 12 20:24:23.872: INFO: Container docker-registry ready: true, restart count 0 May 12 20:24:23.872: INFO: Container nginx ready: true, restart count 0 May 12 20:24:23.872: INFO: kube-apiserver-master1 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.872: INFO: Container kube-apiserver ready: true, restart count 0 May 12 20:24:23.872: INFO: kube-scheduler-master1 started at 2021-05-12 16:32:00 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.872: INFO: Container kube-scheduler ready: true, restart count 0 May 12 20:24:23.872: INFO: node-exporter-wd9j2 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:24:23.872: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:24:23.872: INFO: Container node-exporter ready: true, restart count 0 May 12 20:24:23.873: INFO: kube-controller-manager-master1 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.873: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 20:24:23.873: INFO: kube-multus-ds-amd64-rcwwm started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.873: INFO: Container kube-multus ready: true, restart count 1 W0512 20:24:23.885944 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
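Returning to the eviction failure above: both failing DisruptionController cases first create a PodDisruptionBudget (the "foo" object in the events from namespace "disruption-3307") and then wait for a running pod to try to evict. A rough client-go style sketch of such a PDB follows, assuming the policy/v1beta1 API of this v1.19 cluster; the selector labels are an illustrative guess, not the test's actual values.

```go
// Sketch only: a PodDisruptionBudget like the ones the eviction specs create.
// With maxUnavailable at 0 (the integer case) or an unmet percentage, the
// eviction subresource must deny the request while the budget is unmet.
package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(0) // the percentage variant would use e.g. intstr.FromString("10%")

	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: "disruption-3307"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MaxUnavailable: &maxUnavailable,
			Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}}, // hypothetical labels
		},
	}
	fmt.Printf("pdb %s/%s maxUnavailable=%s\n", pdb.Namespace, pdb.Name, pdb.Spec.MaxUnavailable.String())
}
```

The specs here never reached the eviction check itself, because no matching pod ever started, which is what the "locating a running pod" step was waiting on.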
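Why no pod ever ran: the FailedCreate events above show the replicaset-controller's pods being rejected by PodSecurityPolicy admission, because every candidate policy forbids hostPort 5555; the only ranges allowed in this cluster are [9100] and [9103-9104], plus one policy allowing none. A sketch of the range check follows, with a hypothetical PSP mirroring one of those reported ranges; this is an illustration of the admission rule, not the cluster's actual policy objects.

```go
// Sketch only: admission checks each container hostPort against the HostPorts
// ranges of the PodSecurityPolicies the pod is allowed to use. Port 5555 falls
// in none of the ranges reported above, so pods "rs-" are forbidden.
package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func allows(psp *policyv1beta1.PodSecurityPolicy, port int32) bool {
	for _, r := range psp.Spec.HostPorts {
		if port >= r.Min && port <= r.Max {
			return true
		}
	}
	return false // no matching range: "Host port ... is not allowed to be used"
}

func main() {
	// Hypothetical PSP mirroring one allowed range from the events ([9103-9104]).
	psp := &policyv1beta1.PodSecurityPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "collectd-like"},
		Spec: policyv1beta1.PodSecurityPolicySpec{
			HostPorts: []policyv1beta1.HostPortRange{{Min: 9103, Max: 9104}},
		},
	}
	fmt.Println(allows(psp, 5555)) // false: the ReplicaSet's pods are rejected
	fmt.Println(allows(psp, 9103)) // true
}
```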
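The string "timed out waiting for the condition" printed by every failure in this run is the sentinel error from the k8s.io/apimachinery wait package, which the e2e framework uses to poll a condition (here, the appearance of a running pod) until a deadline. A compact sketch of that pattern, with toy durations and a condition that never succeeds; the helper name is illustrative, not the framework's own:

```go
// Sketch only: wait.PollImmediate returns wait.ErrWaitTimeout, whose message
// is exactly "timed out waiting for the condition", when the condition never
// reports done before the timeout elapses.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	podRunning := func() (bool, error) {
		// In disruption.go the condition lists pods in the test namespace and
		// looks for one in phase Running; here it simply never succeeds.
		return false, nil
	}
	err := wait.PollImmediate(100*time.Millisecond, 500*time.Millisecond, podRunning)
	fmt.Println(err) // timed out waiting for the condition
}
```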
May 12 20:24:23.913: INFO: Latency metrics for node master1 May 12 20:24:23.913: INFO: Logging node info for node master2 May 12 20:24:23.916: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 f3d8b0a9-b7ca-4942-bfc7-69b182fddff8 81526 0 2021-05-12 16:31:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:9d:92:dd:24:1d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:31:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:31:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-12 16:33:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:36:36 +0000 UTC,LastTransitionTime:2021-05-12 16:36:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:15 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:15 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:15 +0000 UTC,LastTransitionTime:2021-05-12 16:31:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:24:15 +0000 UTC,LastTransitionTime:2021-05-12 16:33:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ef3726c355844802be21d744dd831d84,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:6feec048-83de-4ff4-ae36-12985bcda218,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:24:23.916: INFO: Logging kubelet events for node master2 May 12 20:24:23.919: INFO: Logging pods the kubelet thinks is on node master2 May 12 20:24:23.933: INFO: node-exporter-dprww started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:24:23.933: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:24:23.933: INFO: Container node-exporter ready: true, restart count 0 May 12 20:24:23.933: INFO: kube-apiserver-master2 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.933: INFO: Container kube-apiserver ready: true, restart count 0 May 12 20:24:23.933: INFO: kube-controller-manager-master2 started at 2021-05-12 16:38:53 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.933: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 20:24:23.933: INFO: kube-scheduler-master2 started at 2021-05-12 16:32:00 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.933: INFO: Container kube-scheduler ready: true, restart count 2 May 12 20:24:23.933: INFO: kube-proxy-6ljsd started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.933: INFO: Container kube-proxy ready: true, restart count 2 May 12 20:24:23.933: INFO: kube-flannel-pppch started at 2021-05-12 16:33:20 +0000 UTC 
(1+1 container statuses recorded) May 12 20:24:23.933: INFO: Init container install-cni ready: true, restart count 2 May 12 20:24:23.933: INFO: Container kube-flannel ready: true, restart count 1 May 12 20:24:23.933: INFO: kube-multus-ds-amd64-4swdk started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.933: INFO: Container kube-multus ready: true, restart count 1 May 12 20:24:23.933: INFO: coredns-7677f9bb54-b628b started at 2021-05-12 16:33:48 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.933: INFO: Container coredns ready: true, restart count 1 W0512 20:24:23.946537 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:24:23.969: INFO: Latency metrics for node master2 May 12 20:24:23.969: INFO: Logging node info for node master3 May 12 20:24:23.972: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 a0124635-fd55-4774-af26-32765b659025 81527 0 2021-05-12 16:31:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"8e:91:ce:de:d3:10"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-12 16:31:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-12 16:31:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-12 16:33:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-12 16:33:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-12 16:39:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:42 +0000 UTC,LastTransitionTime:2021-05-12 16:35:42 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:16 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:16 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:16 +0000 UTC,LastTransitionTime:2021-05-12 16:31:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:24:16 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8966dda31ac243c3b9142da7eb1a3315,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:29bf94f6-9eaa-4435-8e2a-f17b4721b8df,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:24:23.972: INFO: Logging kubelet events for node master3 May 12 
20:24:23.974: INFO: Logging pods the kubelet thinks is on node master3 May 12 20:24:23.989: INFO: kube-controller-manager-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.989: INFO: Container kube-controller-manager ready: true, restart count 2 May 12 20:24:23.989: INFO: kube-multus-ds-amd64-4hlf8 started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.989: INFO: Container kube-multus ready: true, restart count 1 May 12 20:24:23.989: INFO: prometheus-operator-5bb8cb9d8f-sqgl4 started at 2021-05-12 16:42:55 +0000 UTC (0+2 container statuses recorded) May 12 20:24:23.989: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:24:23.989: INFO: Container prometheus-operator ready: true, restart count 0 May 12 20:24:23.989: INFO: kube-apiserver-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.989: INFO: Container kube-apiserver ready: true, restart count 0 May 12 20:24:23.989: INFO: kube-proxy-d57dm started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.989: INFO: Container kube-proxy ready: true, restart count 1 May 12 20:24:23.989: INFO: kube-flannel-hmbzn started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:24:23.989: INFO: Init container install-cni ready: true, restart count 2 May 12 20:24:23.989: INFO: Container kube-flannel ready: true, restart count 2 May 12 20:24:23.989: INFO: dns-autoscaler-5b7b5c9b6f-8fzpk started at 2021-05-12 16:33:51 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.989: INFO: Container autoscaler ready: true, restart count 1 May 12 20:24:23.989: INFO: node-feature-discovery-controller-5bf5c49849-gdz9z started at 2021-05-12 16:38:58 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.989: INFO: Container nfd-controller ready: true, restart count 0 May 12 20:24:23.989: INFO: node-exporter-jzp84 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:24:23.989: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:24:23.989: INFO: Container node-exporter ready: true, restart count 0 May 12 20:24:23.989: INFO: kube-scheduler-master3 started at 2021-05-12 16:35:04 +0000 UTC (0+1 container statuses recorded) May 12 20:24:23.989: INFO: Container kube-scheduler ready: true, restart count 3 W0512 20:24:24.000948 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
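[Editor's note] The Node dumps in this log include full managed-fields, image lists, and condition histories, which buries the readiness data the failures actually hinge on. The following is a minimal client-go sketch for pulling out just each node's Ready condition and taint count when re-checking a cluster like this by hand. Assumptions: client-go v0.19.x (matching the kube-apiserver v1.19.8 reported above) and the /root/.kube/config path the suite itself logs.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the suite's own ">>> kubeConfig:" lines.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := "Unknown"
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = string(c.Status)
			}
		}
		// e.g. "master3 Ready=True taints=1" (the node-role.kubernetes.io/master
		// NoSchedule taint shown in the dump above).
		fmt.Printf("%s Ready=%s taints=%d\n", n.Name, ready, len(n.Spec.Taints))
	}
}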
May 12 20:24:24.031: INFO: Latency metrics for node master3 May 12 20:24:24.031: INFO: Logging node info for node node1 May 12 20:24:24.034: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 2f7b1644-6865-412d-b423-194903813633 81520 0 2021-05-12 16:32:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"6a:b5:62:81:d9:8e"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-12 16:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-12 16:39:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-12 16:41:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-12 19:02:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:36:30 +0000 UTC,LastTransitionTime:2021-05-12 16:36:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:15 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:15 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:15 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:24:15 +0000 UTC,LastTransitionTime:2021-05-12 16:33:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d9a9ef88c91340689744923e9951f78a,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:d2f244db-a37b-4dba-a8f9-6ea24c6cdb4e,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:96570ae3225d359c9667e2cbd57987388aa9d3be9d5a198ed9b677c7f9f4e450 localhost:30500/barometer-collectd:stable],SizeBytes:1464260582,},ContainerImage{Names:[@ :],SizeBytes:1002488025,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f178577724ef3b8118ca58277a27d594257de6c55a813b61e421df4008a7f73b cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:1636899c10870ab66c48d960a9df620f4f9e86a0c72fbacf36032d27404e7e6c golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:34ac17eb150f557f6ce94ea34e7c03f3899b717307cad633213a9488a431179f nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392919,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:24:24.035: INFO: Logging kubelet events for node node1 May 12 20:24:24.037: INFO: Logging pods the kubelet thinks is on node node1 May 12 20:24:24.061: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.061: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 12 20:24:24.061: INFO: prometheus-k8s-0 started at 2021-05-12 16:43:20 +0000 UTC (0+5 container statuses recorded) May 12 20:24:24.061: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 12 20:24:24.061: INFO: Container grafana ready: true, restart count 0 May 12 20:24:24.061: INFO: Container prometheus ready: true, restart count 1 May 12 20:24:24.061: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 12 20:24:24.061: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 12 20:24:24.061: INFO: collectd-5mpmz started at 2021-05-12 16:49:38 +0000 UTC (0+3 container statuses recorded) May 12 20:24:24.061: INFO: Container collectd ready: true, restart count 0 May 12 20:24:24.062: INFO: Container collectd-exporter ready: true, restart count 0 May 12 20:24:24.062: INFO: Container rbac-proxy ready: true, restart count 0 May 12 20:24:24.062: INFO: kube-proxy-r9vsx started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.062: INFO: Container kube-proxy ready: true, restart count 1 May 12 20:24:24.062: INFO: node-exporter-ddxbd started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:24:24.062: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:24:24.062: INFO: Container node-exporter ready: true, restart count 0 May 12 20:24:24.062: INFO: kube-flannel-r7w6z started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:24:24.062: INFO: Init container install-cni ready: true, restart count 1 May 12 20:24:24.062: INFO: Container kube-flannel ready: true, restart count 2 May 12 20:24:24.062: INFO: kube-multus-ds-amd64-fhzwc started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.062: INFO: Container kube-multus ready: true, restart count 1 May 12 20:24:24.062: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj started at 2021-05-12 16:33:53 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.062: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 12 20:24:24.062: INFO: cmk-v4qwz started at 2021-05-12 16:42:07 +0000 UTC (0+2 container statuses recorded) May 12 20:24:24.062: INFO: Container nodereport ready: true, restart count 0 May 12 20:24:24.062: INFO: Container reconcile ready: true, restart count 0 May 12 20:24:24.062: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff started at 2021-05-12 16:39:41 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.062: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 20:24:24.062: INFO: cmk-init-discover-node1-2x2zk started at 2021-05-12 16:41:25 +0000 UTC (0+3 container statuses recorded) May 12 20:24:24.062: INFO: Container discover 
ready: false, restart count 0 May 12 20:24:24.062: INFO: Container init ready: false, restart count 0 May 12 20:24:24.062: INFO: Container install ready: false, restart count 0 May 12 20:24:24.062: INFO: nginx-proxy-node1 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.062: INFO: Container nginx-proxy ready: true, restart count 2 May 12 20:24:24.062: INFO: node-feature-discovery-worker-qtn84 started at 2021-05-12 16:38:48 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.062: INFO: Container nfd-worker ready: true, restart count 0 W0512 20:24:24.074932 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:24:24.106: INFO: Latency metrics for node node1 May 12 20:24:24.106: INFO: Logging node info for node node2 May 12 20:24:24.109: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 16e4ef1f-0c51-44d5-a257-2eaa348c0d52 81549 0 2021-05-12 16:32:43 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:9a:ee:23:4c:ac"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-12 16:32:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-12 16:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-12 16:39:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-12 16:41:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-12 19:08:41 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-05-12 19:08:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-12 16:35:14 +0000 UTC,LastTransitionTime:2021-05-12 16:35:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:23 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:23 +0000 UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-12 20:24:23 +0000 
UTC,LastTransitionTime:2021-05-12 16:32:43 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-12 20:24:23 +0000 UTC,LastTransitionTime:2021-05-12 16:35:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eebaf4858fee4a739009f2f7f2717953,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:15bd8119-98f5-4e13-a2fa-897c2224dd6e,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:96570ae3225d359c9667e2cbd57987388aa9d3be9d5a198ed9b677c7f9f4e450 localhost:30500/barometer-collectd:stable],SizeBytes:1464260582,},ContainerImage{Names:[localhost:30500/cmk@sha256:f178577724ef3b8118ca58277a27d594257de6c55a813b61e421df4008a7f73b localhost:30500/cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726657349,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:34ac17eb150f557f6ce94ea34e7c03f3899b717307cad633213a9488a431179f nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 
quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:155361826d160d9f566aede3ea35b34f1e0c6422720285b1c22467c2b21a90aa localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392919,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:60f4b5001bb5e7280fddf9143d3ed9bcde4e8016eef54522b5aea6bac9d9774b localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:1935bf73835eb6d4446668a5484bb3724d97d926b26b41c7bf064aa3a5a8bc5f localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 20:24:24.111: INFO: Logging kubelet events for node node2 May 12 20:24:24.113: INFO: Logging pods the kubelet thinks is on node node2 May 12 20:24:24.130: INFO: collectd-tng6x started at 2021-05-12 16:49:38 +0000 UTC (0+3 container statuses recorded) May 12 20:24:24.130: INFO: Container collectd ready: true, restart count 0 May 12 20:24:24.130: INFO: Container collectd-exporter ready: true, restart count 0 May 12 20:24:24.130: INFO: Container rbac-proxy ready: true, restart count 0 May 12 20:24:24.130: INFO: kube-multus-ds-amd64-k28rf started at 2021-05-12 16:33:28 +0000 UTC (0+1 container statuses recorded) May 12 
20:24:24.130: INFO: Container kube-multus ready: true, restart count 1 May 12 20:24:24.130: INFO: node-feature-discovery-worker-zjvzk started at 2021-05-12 16:38:48 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.130: INFO: Container nfd-worker ready: true, restart count 0 May 12 20:24:24.130: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-j6r4r started at 2021-05-12 16:39:41 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.130: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 20:24:24.130: INFO: node-exporter-h5rv7 started at 2021-05-12 16:43:02 +0000 UTC (0+2 container statuses recorded) May 12 20:24:24.130: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 20:24:24.130: INFO: Container node-exporter ready: true, restart count 0 May 12 20:24:24.130: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8xc75 started at 2021-05-12 16:46:05 +0000 UTC (0+2 container statuses recorded) May 12 20:24:24.130: INFO: Container tas-controller ready: true, restart count 0 May 12 20:24:24.130: INFO: Container tas-extender ready: true, restart count 0 May 12 20:24:24.130: INFO: nginx-proxy-node2 started at 2021-05-12 16:38:15 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.130: INFO: Container nginx-proxy ready: true, restart count 2 May 12 20:24:24.130: INFO: kube-flannel-rqtcs started at 2021-05-12 16:33:20 +0000 UTC (1+1 container statuses recorded) May 12 20:24:24.130: INFO: Init container install-cni ready: true, restart count 2 May 12 20:24:24.130: INFO: Container kube-flannel ready: true, restart count 1 May 12 20:24:24.130: INFO: cmk-webhook-6c9d5f8578-fgcvr started at 2021-05-12 16:42:08 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.130: INFO: Container cmk-webhook ready: true, restart count 0 May 12 20:24:24.130: INFO: cmk-init-discover-node2-qrd9v started at 2021-05-12 16:41:44 +0000 UTC (0+3 container statuses recorded) May 12 20:24:24.130: INFO: Container discover ready: false, restart count 0 May 12 20:24:24.130: INFO: Container init ready: false, restart count 0 May 12 20:24:24.130: INFO: Container install ready: false, restart count 0 May 12 20:24:24.130: INFO: kube-proxy-grtqc started at 2021-05-12 16:32:45 +0000 UTC (0+1 container statuses recorded) May 12 20:24:24.130: INFO: Container kube-proxy ready: true, restart count 2 May 12 20:24:24.130: INFO: cmk-gbw5d started at 2021-05-12 16:42:08 +0000 UTC (0+2 container statuses recorded) May 12 20:24:24.130: INFO: Container nodereport ready: true, restart count 0 May 12 20:24:24.130: INFO: Container reconcile ready: true, restart count 0 W0512 20:24:24.142891 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 12 20:24:24.175: INFO: Latency metrics for node node2 May 12 20:24:24.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-3307" for this suite. 
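[Editor's note] Both DisruptionController specs in this run fail at disruption.go:241 with "timed out waiting for the condition" while locating a running pod (the verdict follows below). A first debugging step when reproducing by hand is to read the PodDisruptionBudget status and see whether the controller ever reconciled the budget. This is a sketch under assumptions: the policy/v1beta1 API (what a v1.19.8 apiserver serves), and placeholder PDB name "foo" and namespace "disruption-debug", since the suite's own namespace disruption-3307 is destroyed during teardown.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Placeholder name/namespace: recreate a PDB by hand before running this.
	pdb, err := cs.PolicyV1beta1().PodDisruptionBudgets("disruption-debug").
		Get(context.TODO(), "foo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ObservedGeneration lagging behind Generation means the disruption
	// controller has not processed the budget yet.
	fmt.Printf("generation=%d observed=%d allowed=%d current=%d desired=%d expected=%d\n",
		pdb.Generation, pdb.Status.ObservedGeneration,
		pdb.Status.PodDisruptionsAllowed,
		pdb.Status.CurrentHealthy, pdb.Status.DesiredHealthy, pdb.Status.ExpectedPods)
}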
• Failure [602.390 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 May 12 20:24:23.834: Unexpected error: <*errors.errorString | 0xc0002bc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241 ------------------------------ {"msg":"FAILED [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction [Serial]","total":4,"completed":0,"skipped":3661,"failed":3,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","[sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete","[sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction [Serial]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:229 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 20:24:24.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon with node affinity /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:229 May 12 20:24:24.223: INFO: Creating daemon "daemon-set" with a node affinity STEP: Initially, daemon pods should not be running on any nodes. May 12 20:24:24.238: INFO: Number of nodes with available pods: 0 May 12 20:24:24.238: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
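[Editor's note] The "Change node label to blue" / "Remove the node label" steps in this spec are plain label patches against the Node object; the polling below shows the daemon pod following the label. An equivalent pair of patches by hand (the "color"="blue" key/value and the node name are placeholders here; the suite generates its own per-namespace label):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Add the label the DaemonSet's node affinity selects on ...
	add := []byte(`{"metadata":{"labels":{"color":"blue"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "node1",
		types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// ... and later remove it again; a null value deletes the key in a
	// strategic merge patch, which unschedules the daemon pod.
	del := []byte(`{"metadata":{"labels":{"color":null}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "node1",
		types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}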
May 12 20:24:24.260: INFO: Number of nodes with available pods: 0 May 12 20:24:24.260: INFO: Node node1 is running more than one daemon pod May 12 20:24:25.264: INFO: Number of nodes with available pods: 0 May 12 20:24:25.264: INFO: Node node1 is running more than one daemon pod May 12 20:24:26.263: INFO: Number of nodes with available pods: 0 May 12 20:24:26.263: INFO: Node node1 is running more than one daemon pod May 12 20:24:27.265: INFO: Number of nodes with available pods: 0 May 12 20:24:27.265: INFO: Node node1 is running more than one daemon pod May 12 20:24:28.266: INFO: Number of nodes with available pods: 1 May 12 20:24:28.266: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Remove the node label and wait for daemons to be unscheduled May 12 20:24:28.278: INFO: Number of nodes with available pods: 1 May 12 20:24:28.278: INFO: Number of running nodes: 0, number of available pods: 1 May 12 20:24:29.281: INFO: Number of nodes with available pods: 0 May 12 20:24:29.281: INFO: Number of running nodes: 0, number of available pods: 0 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4328, will wait for the garbage collector to delete the pods May 12 20:24:29.341: INFO: Deleting DaemonSet.extensions daemon-set took: 5.45107ms May 12 20:24:29.341: INFO: Terminating DaemonSet.extensions daemon-set pods took: 34.933µs May 12 20:24:32.445: INFO: Number of nodes with available pods: 0 May 12 20:24:32.445: INFO: Number of running nodes: 0, number of available pods: 0 May 12 20:24:32.447: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4328/daemonsets","resourceVersion":"81634"},"items":null} May 12 20:24:32.449: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4328/pods","resourceVersion":"81634"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 20:24:32.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4328" for this suite. 
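[Editor's note] The teardown above deletes the DaemonSet and then "will wait for the garbage collector to delete the pods". The suite's helper may use a different propagation policy internally; by hand, the simple way to get the same delete-then-wait semantics is a foreground-propagation delete, which blocks removal of the DaemonSet object until the garbage collector has deleted its dependent pods. Sketch (namespace is a placeholder; client-go v0.19.x assumed):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Foreground propagation: the DaemonSet object is only removed once the
	// garbage collector has deleted all of its pods.
	fg := metav1.DeletePropagationForeground
	if err := cs.AppsV1().DaemonSets("daemonsets-debug").Delete(context.TODO(),
		"daemon-set", metav1.DeleteOptions{PropagationPolicy: &fg}); err != nil {
		panic(err)
	}
}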
May 12 20:24:32.485: INFO: Running AfterSuite actions on all nodes
May 12 20:24:32.485: INFO: Running AfterSuite actions on node 1
May 12 20:24:32.485: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_apps_serial/junit_01.xml
{"msg":"Test Suite completed","total":4,"completed":1,"skipped":5480,"failed":3,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","[sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete","[sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction [Serial]"]}

Summarizing 3 Failures:

[Fail] [sig-apps] DisruptionController [It] evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241

[Fail] [sig-apps] Daemon set [Serial] [It] should not update pod when spec was updated and update strategy is OnDelete
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:323

[Fail] [sig-apps] DisruptionController [It] evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241
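The second failure in the summary targets the OnDelete update strategy: once Spec.UpdateStrategy.Type is OnDelete, updating the pod template must not replace running pods; new pods appear only after the old ones are deleted by hand. A hedged sketch of setting that strategy with client-go (the client, namespace, and name are illustrative, not the test's values):

// ondelete_sketch.go: switch a DaemonSet to the OnDelete update strategy.
package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// setOnDelete fetches an existing DaemonSet and sets Type=OnDelete, so that
// later template changes leave running pods untouched until each is deleted.
func setOnDelete(client kubernetes.Interface, ns, name string) error {
	ds, err := client.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.OnDeleteDaemonSetStrategyType,
	}
	_, err = client.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
	return err
}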
Ran 4 of 5484 Specs in 1517.543 seconds
FAIL! -- 1 Passed | 3 Failed | 0 Pending | 5480 Skipped
--- FAIL: TestE2E (1517.60s)
FAIL

Ginkgo ran 1 suite in 25m18.755611952s
Test Suite Failed
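All three failures surface the same generic error, "timed out waiting for the condition". That string is wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned when a polled condition never becomes true before the deadline (the failing specs here polled for roughly ten minutes before giving up at steps like "locating a running pod"). A minimal, self-contained reproduction of the polling pattern, with illustrative intervals:

// wait_timeout_sketch.go: where "timed out waiting for the condition" comes from.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 2s for up to 10s; the condition below never becomes true,
	// so PollImmediate returns wait.ErrWaitTimeout.
	err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
		// In the e2e tests this would check e.g. whether a pod is Running.
		return false, nil
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}

The error therefore says nothing about why the condition failed; the useful signal is the step that was in progress when the timeout fired.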