I0528 23:35:24.969602 22 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0528 23:35:24.969729 22 e2e.go:129] Starting e2e run "892c9d0b-50c7-4a43-98c0-bacd6fd04f5f" on Ginkgo node 1
{"msg":"Test Suite starting","total":4,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1622244923 - Will randomize all specs
Will run 4 of 5484 specs

May 28 23:35:24.984: INFO: >>> kubeConfig: /root/.kube/config
May 28 23:35:24.988: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 28 23:35:25.017: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 28 23:35:25.084: INFO: The status of Pod cmk-init-discover-node1-rvqxm is Succeeded, skipping waiting
May 28 23:35:25.084: INFO: The status of Pod cmk-init-discover-node2-95cbr is Succeeded, skipping waiting
May 28 23:35:25.084: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 28 23:35:25.084: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 28 23:35:25.084: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 28 23:35:25.094: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 28 23:35:25.094: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 28 23:35:25.094: INFO: e2e test version: v1.19.11
May 28 23:35:25.095: INFO: kube-apiserver version: v1.19.8
May 28 23:35:25.095: INFO: >>> kubeConfig: /root/.kube/config
May 28 23:35:25.101: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 28 23:35:25.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
May 28 23:35:25.127: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 28 23:35:25.130: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68
[It] evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
STEP: Waiting for the pdb to be processed
STEP: locating a running pod
May 28 23:45:25.158: FAIL: Unexpected error:
    <*errors.errorString | 0xc0003021f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func5.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241 +0x16a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a81500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a81500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001a81500, 0x4de5140)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "disruption-6843".
STEP: Found 3 events.
May 28 23:45:25.162: INFO: At 2021-05-28 23:35:25 +0000 UTC - event for foo: {controllermanager } NoPods: No matching pods found
May 28 23:45:25.162: INFO: At 2021-05-28 23:35:25 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: []]
May 28 23:45:25.162: INFO: At 2021-05-28 23:35:25 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used.
Allowed ports: [9103-9104]] May 28 23:45:25.165: INFO: POD NODE PHASE GRACE CONDITIONS May 28 23:45:25.165: INFO: May 28 23:45:25.169: INFO: Logging node info for node master1 May 28 23:45:25.172: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 0aa78934-442a-44a3-8c5c-f827e18dd3d7 81109 0 2021-05-28 19:56:25 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0a:41:0b:9d:15:5a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:56:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:03 +0000 UTC,LastTransitionTime:2021-05-28 20:02:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:20 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:20 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:20 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:45:20 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7fb2c462cae4b9c990ab2e5c72f7816,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:24c06694-15ae-4da4-9143-144d98afdd8d,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:45:25.172: INFO: Logging kubelet events for node master1 May 28 23:45:25.175: INFO: Logging 
pods the kubelet thinks is on node master1
May 28 23:45:25.190: INFO: kube-apiserver-master1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.190: INFO: Container kube-apiserver ready: true, restart count 0
May 28 23:45:25.190: INFO: kube-controller-manager-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.190: INFO: Container kube-controller-manager ready: true, restart count 2
May 28 23:45:25.190: INFO: kube-multus-ds-amd64-n9j8k started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.190: INFO: Container kube-multus ready: true, restart count 1
May 28 23:45:25.190: INFO: docker-registry-docker-registry-56cbc7bc58-rbghz started at 2021-05-28 20:02:55 +0000 UTC (0+2 container statuses recorded)
May 28 23:45:25.190: INFO: Container docker-registry ready: true, restart count 0
May 28 23:45:25.190: INFO: Container nginx ready: true, restart count 0
May 28 23:45:25.190: INFO: prometheus-operator-5bb8cb9d8f-7wdtq started at 2021-05-28 20:10:02 +0000 UTC (0+2 container statuses recorded)
May 28 23:45:25.190: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 28 23:45:25.190: INFO: Container prometheus-operator ready: true, restart count 0
May 28 23:45:25.190: INFO: kube-scheduler-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.190: INFO: Container kube-scheduler ready: true, restart count 0
May 28 23:45:25.190: INFO: kube-proxy-994p2 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.190: INFO: Container kube-proxy ready: true, restart count 1
May 28 23:45:25.190: INFO: kube-flannel-d54gm started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded)
May 28 23:45:25.190: INFO: Init container install-cni ready: true, restart count 0
May 28 23:45:25.190: INFO: Container kube-flannel ready: true, restart count 2
May 28 23:45:25.190: INFO: coredns-7677f9bb54-zb7h8 started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.190: INFO: Container coredns ready: true, restart count 1
May 28 23:45:25.190: INFO: node-exporter-9b7pq started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded)
May 28 23:45:25.190: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 28 23:45:25.190: INFO: Container node-exporter ready: true, restart count 0
W0528 23:45:25.204362 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
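The FailedCreate events collected above point at the likely root cause of this failure: every PodSecurityPolicy in the cluster restricts hostPort to ranges that exclude 5555 (the allowed ranges reported are [9103-9104], [9100], and []), so the ReplicaSet "rs" never produces a running pod and the spec times out at disruption.go:241 while locating one. A minimal sketch of a policy that would admit that port follows, written in Go against client-go v0.19.x to match the suite; the policy name and the permissive strategies are illustrative assumptions, not taken from this cluster:

package main

import (
	"context"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createHostPortPSP creates a PodSecurityPolicy that admits hostPort 5555,
// the port the rs- pods were rejected for in the events above.
func createHostPortPSP(ctx context.Context, cs kubernetes.Interface) error {
	psp := &policyv1beta1.PodSecurityPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-hostport-5555"}, // hypothetical name
		Spec: policyv1beta1.PodSecurityPolicySpec{
			// Admit exactly the host port the rs- pods request.
			HostPorts: []policyv1beta1.HostPortRange{{Min: 5555, Max: 5555}},
			// Permissive strategies so the policy validates; tighten as needed.
			SELinux:            policyv1beta1.SELinuxStrategyOptions{Rule: policyv1beta1.SELinuxStrategyRunAsAny},
			RunAsUser:          policyv1beta1.RunAsUserStrategyOptions{Rule: policyv1beta1.RunAsUserStrategyRunAsAny},
			SupplementalGroups: policyv1beta1.SupplementalGroupsStrategyOptions{Rule: policyv1beta1.SupplementalGroupsStrategyRunAsAny},
			FSGroup:            policyv1beta1.FSGroupStrategyOptions{Rule: policyv1beta1.FSGroupStrategyRunAsAny},
			Volumes:            []policyv1beta1.FSType{policyv1beta1.All},
		},
	}
	_, err := cs.PolicyV1beta1().PodSecurityPolicies().Create(ctx, psp, metav1.CreateOptions{})
	return err
}

Note that the PSP admission plugin only applies a policy that the pod's service account is authorized to use, so a matching RBAC rule granting the "use" verb on this policy would also be needed before the ReplicaSet's pods are admitted.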
May 28 23:45:25.236: INFO: Latency metrics for node master1 May 28 23:45:25.236: INFO: Logging node info for node master2 May 28 23:45:25.239: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 b80f32b6-a396-4f09-a110-345a08d762ee 81090 0 2021-05-28 19:57:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"b2:be:c9:d8:cf:bb"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-28 20:06:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:00:49 +0000 UTC,LastTransitionTime:2021-05-28 20:00:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:16 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:16 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:16 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:45:16 +0000 UTC,LastTransitionTime:2021-05-28 20:00:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2746caf91c53460599f165aa716150cd,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:b63b522f-706f-4e28-a104-c73edcd04319,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:45:25.239: INFO: Logging kubelet events for node master2 May 28 23:45:25.242: INFO: Logging pods the kubelet thinks is on node master2 May 28 23:45:25.256: INFO: dns-autoscaler-5b7b5c9b6f-r797x started at 2021-05-28 19:59:31 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.256: INFO: Container autoscaler ready: true, restart count 1 May 28 23:45:25.256: INFO: node-exporter-frch9 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 28 23:45:25.256: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 28 23:45:25.256: INFO: Container node-exporter ready: true, restart count 0 May 28 23:45:25.256: INFO: kube-scheduler-master2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.256: INFO: Container kube-scheduler ready: true, restart count 3 May 28 23:45:25.256: INFO: kube-proxy-jkbl8 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.256: INFO: Container kube-proxy ready: true, restart count 1 May 28 23:45:25.256: INFO: kube-flannel-xvtkj started at 2021-05-28 19:59:00 +0000 UTC 
(1+1 container statuses recorded) May 28 23:45:25.256: INFO: Init container install-cni ready: true, restart count 0 May 28 23:45:25.256: INFO: Container kube-flannel ready: true, restart count 2 May 28 23:45:25.256: INFO: kube-multus-ds-amd64-qjwcz started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.256: INFO: Container kube-multus ready: true, restart count 1 May 28 23:45:25.256: INFO: node-feature-discovery-controller-5bf5c49849-n9ncl started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.256: INFO: Container nfd-controller ready: true, restart count 0 May 28 23:45:25.256: INFO: kube-apiserver-master2 started at 2021-05-28 20:05:31 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.256: INFO: Container kube-apiserver ready: true, restart count 0 May 28 23:45:25.256: INFO: kube-controller-manager-master2 started at 2021-05-28 20:05:41 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.256: INFO: Container kube-controller-manager ready: true, restart count 3 W0528 23:45:25.268467 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 28 23:45:25.294: INFO: Latency metrics for node master2 May 28 23:45:25.294: INFO: Logging node info for node master3 May 28 23:45:25.297: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 301b0b5b-fc42-4c78-adb7-75baf6e0cc7e 81097 0 2021-05-28 19:57:14 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:fa:ab:49:88:02"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:12 +0000 UTC,LastTransitionTime:2021-05-28 20:02:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:18 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:18 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:18 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:45:18 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a0cb6c0eb1d842469076fff344213c13,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:c6adcff4-8bf7-40d7-9d14-54b1c6a87bc8,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 28 23:45:25.297: INFO: Logging kubelet events for node master3
May 28 23:45:25.299: INFO: Logging pods the kubelet thinks is on node master3
May 28 23:45:25.312: INFO: kube-proxy-t5bh6 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.312: INFO: Container kube-proxy ready: true, restart count 1
May 28 23:45:25.312: INFO: kube-flannel-zrskq started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded)
May 28 23:45:25.312: INFO: Init container install-cni ready: true, restart count 0
May 28 23:45:25.312: INFO: Container kube-flannel ready: true, restart count 1
May 28 23:45:25.312: INFO: kube-multus-ds-amd64-wqgf7 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.312: INFO: Container kube-multus ready: true, restart count 1
May 28 23:45:25.312: INFO: coredns-7677f9bb54-8v554 started at 2021-05-28 19:59:28 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.312: INFO: Container coredns ready: true, restart count 1
May 28 23:45:25.312: INFO: node-exporter-w42s5 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded)
May 28 23:45:25.312: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 28 23:45:25.312: INFO: Container node-exporter ready: true, restart count 0
May 28 23:45:25.312: INFO: kube-apiserver-master3 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.312: INFO: Container kube-apiserver ready: true, restart count 0
May 28 23:45:25.312: INFO: kube-controller-manager-master3 started at 2021-05-28 20:06:02 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.312: INFO: Container kube-controller-manager ready: true, restart count 1
May 28 23:45:25.312: INFO: kube-scheduler-master3 started at 2021-05-28 20:01:23 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.312: INFO: Container kube-scheduler ready: true, restart count 1
W0528 23:45:25.326621 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
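For reference, the check this spec was building toward never ran: once a pod covered by the PodDisruptionBudget is Running, the test submits an Eviction and expects the API server to reject it with 429 TooManyRequests, because the budget permits no disruptions. A rough sketch of that flow, under the same client-go v0.19.x assumption as the earlier snippet (function and variable names are illustrative, not the suite's own helpers):

package main

import (
	"context"
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expectEvictionDenied submits an Eviction for podName and verifies that the
// API server rejects it because the governing PDB allows no disruptions.
func expectEvictionDenied(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
	eviction := &policyv1beta1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: ns},
	}
	err := cs.PolicyV1beta1().Evictions(ns).Evict(ctx, eviction)
	if apierrors.IsTooManyRequests(err) {
		return nil // denied by the disruption budget, as the test expects
	}
	return fmt.Errorf("expected eviction of %s/%s to be denied by the PDB, got: %v", ns, podName, err)
}

The e2e suite wraps checks like this in its wait/poll helpers with a timeout; the sketch shows only a single attempt.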
May 28 23:45:25.357: INFO: Latency metrics for node master3 May 28 23:45:25.357: INFO: Logging node info for node node1 May 28 23:45:25.360: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 43e51cb4-5acb-42b5-8f26-cd5e977f3829 81089 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"d2:9d:b7:73:58:07"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-28 22:27:50 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-05-28 22:35:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:58 +0000 UTC,LastTransitionTime:2021-05-28 20:01:58 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:15 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:15 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:15 
+0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:45:15 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:abe6e95dbfa24a9abd34d8fa2abe7655,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:17719d1f-7df5-4d95-81f3-7d3ac5110ba2,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[@ :],SizeBytes:1002495332,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:aa24a0a337084e0747e7c8e97e1131270ae38150e691314f1fa19f4b2f9093c0 golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad 
alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:45:25.361: INFO: Logging kubelet events for node node1 May 28 23:45:25.363: INFO: Logging pods the kubelet thinks is on node node1 May 28 23:45:25.382: INFO: kube-proxy-lsngv started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.382: INFO: Container kube-proxy ready: true, restart count 2 May 28 23:45:25.382: INFO: kube-multus-ds-amd64-x7826 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.382: INFO: Container kube-multus ready: true, restart count 1 May 28 23:45:25.382: INFO: kubernetes-metrics-scraper-678c97765c-wblkm started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.382: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 28 23:45:25.382: INFO: nginx-proxy-node1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.382: INFO: Container nginx-proxy ready: true, restart count 1 May 28 23:45:25.382: INFO: cmk-jhzjr started at 2021-05-28 20:09:15 +0000 UTC (0+2 container statuses recorded) May 28 23:45:25.382: INFO: Container nodereport ready: true, restart count 0 May 28 23:45:25.382: INFO: Container reconcile ready: true, restart count 0 May 28 23:45:25.382: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.382: INFO: Container kube-sriovdp ready: true, restart count 0 May 28 23:45:25.382: INFO: node-exporter-khdpg started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 28 23:45:25.382: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 28 23:45:25.382: INFO: Container node-exporter ready: true, restart count 0 May 28 23:45:25.382: INFO: collectd-qw9nd started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 28 23:45:25.382: INFO: Container collectd ready: true, restart count 0 May 28 23:45:25.382: INFO: Container collectd-exporter ready: true, restart count 0 May 28 23:45:25.382: INFO: Container rbac-proxy ready: true, restart count 0 May 28 23:45:25.382: INFO: kube-flannel-2tjjt started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 28 23:45:25.382: INFO: Init container install-cni ready: true, restart count 0 May 28 23:45:25.382: INFO: Container kube-flannel ready: true, restart count 2 May 28 23:45:25.382: INFO: node-feature-discovery-worker-5x4qg started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.382: INFO: Container nfd-worker ready: true, restart count 0 May 28 23:45:25.382: INFO: 
kubernetes-dashboard-86c6f9df5b-c5sbq started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.382: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 28 23:45:25.382: INFO: prometheus-k8s-0 started at 2021-05-28 20:10:26 +0000 UTC (0+5 container statuses recorded) May 28 23:45:25.382: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 28 23:45:25.382: INFO: Container grafana ready: true, restart count 0 May 28 23:45:25.382: INFO: Container prometheus ready: true, restart count 1 May 28 23:45:25.382: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 28 23:45:25.382: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 28 23:45:25.382: INFO: cmk-init-discover-node1-rvqxm started at 2021-05-28 20:08:32 +0000 UTC (0+3 container statuses recorded) May 28 23:45:25.382: INFO: Container discover ready: false, restart count 0 May 28 23:45:25.382: INFO: Container init ready: false, restart count 0 May 28 23:45:25.382: INFO: Container install ready: false, restart count 0 W0528 23:45:25.395623 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 28 23:45:25.442: INFO: Latency metrics for node node1 May 28 23:45:25.442: INFO: Logging node info for node node2 May 28 23:45:25.446: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 3cc89580-b568-4c82-bd1f-200d0823da3b 81103 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"62:22:2c:ae:14:ae"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-28 22:35:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:05 +0000 UTC,LastTransitionTime:2021-05-28 20:01:05 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:20 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:20 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:45:20 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:45:20 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b2730c4b09814ab9a78e7bc62c820fbb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:f1459072-d21d-46de-a5d9-46ec9349aae0,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:45:25.446: INFO: Logging kubelet events for node node2 May 28 23:45:25.449: INFO: Logging pods the kubelet thinks is on node node2 May 28 23:45:25.465: INFO: nginx-proxy-node2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 28 23:45:25.465: INFO: Container nginx-proxy ready: true, restart count 2 May 28 23:45:25.465: INFO: collectd-b5dgp started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 28 23:45:25.465: INFO: 
Container collectd ready: true, restart count 0
May 28 23:45:25.465: INFO: Container collectd-exporter ready: true, restart count 0
May 28 23:45:25.465: INFO: Container rbac-proxy ready: true, restart count 0
May 28 23:45:25.465: INFO: kube-proxy-z5czn started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.465: INFO: Container kube-proxy ready: true, restart count 2
May 28 23:45:25.465: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-29vc6 started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.465: INFO: Container kube-sriovdp ready: true, restart count 0
May 28 23:45:25.465: INFO: cmk-init-discover-node2-95cbr started at 2021-05-28 20:08:52 +0000 UTC (0+3 container statuses recorded)
May 28 23:45:25.465: INFO: Container discover ready: false, restart count 0
May 28 23:45:25.465: INFO: Container init ready: false, restart count 0
May 28 23:45:25.465: INFO: Container install ready: false, restart count 0
May 28 23:45:25.465: INFO: kube-flannel-d9wsg started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded)
May 28 23:45:25.465: INFO: Init container install-cni ready: true, restart count 2
May 28 23:45:25.465: INFO: Container kube-flannel ready: true, restart count 2
May 28 23:45:25.465: INFO: kube-multus-ds-amd64-c9cj2 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.465: INFO: Container kube-multus ready: true, restart count 1
May 28 23:45:25.465: INFO: node-feature-discovery-worker-j2wnf started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.465: INFO: Container nfd-worker ready: true, restart count 0
May 28 23:45:25.465: INFO: cmk-5vxwj started at 2021-05-28 20:09:16 +0000 UTC (0+2 container statuses recorded)
May 28 23:45:25.465: INFO: Container nodereport ready: true, restart count 0
May 28 23:45:25.465: INFO: Container reconcile ready: true, restart count 0
May 28 23:45:25.465: INFO: cmk-webhook-6c9d5f8578-6cpp6 started at 2021-05-28 20:09:16 +0000 UTC (0+1 container statuses recorded)
May 28 23:45:25.466: INFO: Container cmk-webhook ready: true, restart count 0
May 28 23:45:25.466: INFO: node-exporter-sjsht started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded)
May 28 23:45:25.466: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 28 23:45:25.466: INFO: Container node-exporter ready: true, restart count 0
May 28 23:45:25.466: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-csd8v started at 2021-05-28 20:13:00 +0000 UTC (0+2 container statuses recorded)
May 28 23:45:25.466: INFO: Container tas-controller ready: true, restart count 0
May 28 23:45:25.466: INFO: Container tas-extender ready: true, restart count 0
W0528 23:45:25.477001 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 28 23:45:25.509: INFO: Latency metrics for node node2
May 28 23:45:25.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6843" for this suite.
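------------------------------
Note: the "locating a running pod" step above polls the test namespace for a Running pod and gives up after the framework timeout; the "timed out waiting for the condition" text in the failure summary below is the message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait. A minimal, self-contained Go sketch (not the disruption.go source; the always-false condition stands in for the pod lookup) that reproduces the same error:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Stand-in for the "locating a running pod" poll: the real test lists
	// pods in the namespace and returns true once one is Running. If no pod
	// ever appears, PollImmediate returns wait.ErrWaitTimeout.
	err := wait.PollImmediate(100*time.Millisecond, 2*time.Second, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}

When admission rejects every ReplicaSet pod (see the PodSecurityPolicy FailedCreate events recorded for the follow-up spec below), the condition can never return true, so the spec fails only after the full 10-minute wait.
------------------------------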
• Failure [600.411 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222

  May 28 23:45:25.158: Unexpected error:
      <*errors.errorString | 0xc0003021f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241
------------------------------
{"msg":"FAILED [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","total":4,"completed":0,"skipped":334,"failed":1,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 28 23:45:25.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:68
[It] evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
STEP: Waiting for the pdb to be processed
STEP: locating a running pod
May 28 23:55:27.576: FAIL: Unexpected error:
    <*errors.errorString | 0xc0003021f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func5.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241 +0x16a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a81500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a81500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001a81500, 0x4de5140)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by
testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "disruption-8708".
STEP: Found 4 events.
May 28 23:55:27.580: INFO: At 2021-05-28 23:45:25 +0000 UTC - event for foo: {controllermanager } NoPods: No matching pods found
May 28 23:55:27.580: INFO: At 2021-05-28 23:45:25 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: []]
May 28 23:55:27.580: INFO: At 2021-05-28 23:45:25 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100]]
May 28 23:55:27.580: INFO: At 2021-05-28 23:45:25 +0000 UTC - event for rs: {replicaset-controller } FailedCreate: Error creating: pods "rs-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 5555: Host port 5555 is not allowed to be used.
Allowed ports: [9103-9104]] May 28 23:55:27.582: INFO: POD NODE PHASE GRACE CONDITIONS May 28 23:55:27.582: INFO: May 28 23:55:27.587: INFO: Logging node info for node master1 May 28 23:55:27.590: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 0aa78934-442a-44a3-8c5c-f827e18dd3d7 83266 0 2021-05-28 19:56:25 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0a:41:0b:9d:15:5a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:56:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:03 +0000 UTC,LastTransitionTime:2021-05-28 20:02:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:23 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:23 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:23 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:55:23 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7fb2c462cae4b9c990ab2e5c72f7816,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:24c06694-15ae-4da4-9143-144d98afdd8d,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:55:27.590: INFO: Logging kubelet events for node master1 May 28 23:55:27.594: INFO: Logging 
pods the kubelet thinks is on node master1
May 28 23:55:27.612: INFO: coredns-7677f9bb54-zb7h8 started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded)
May 28 23:55:27.612: INFO: Container coredns ready: true, restart count 1
May 28 23:55:27.612: INFO: node-exporter-9b7pq started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded)
May 28 23:55:27.612: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 28 23:55:27.612: INFO: Container node-exporter ready: true, restart count 0
May 28 23:55:27.612: INFO: kube-proxy-994p2 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded)
May 28 23:55:27.612: INFO: Container kube-proxy ready: true, restart count 1
May 28 23:55:27.612: INFO: kube-flannel-d54gm started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded)
May 28 23:55:27.612: INFO: Init container install-cni ready: true, restart count 0
May 28 23:55:27.612: INFO: Container kube-flannel ready: true, restart count 2
May 28 23:55:27.612: INFO: docker-registry-docker-registry-56cbc7bc58-rbghz started at 2021-05-28 20:02:55 +0000 UTC (0+2 container statuses recorded)
May 28 23:55:27.612: INFO: Container docker-registry ready: true, restart count 0
May 28 23:55:27.612: INFO: Container nginx ready: true, restart count 0
May 28 23:55:27.612: INFO: prometheus-operator-5bb8cb9d8f-7wdtq started at 2021-05-28 20:10:02 +0000 UTC (0+2 container statuses recorded)
May 28 23:55:27.612: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 28 23:55:27.612: INFO: Container prometheus-operator ready: true, restart count 0
May 28 23:55:27.612: INFO: kube-scheduler-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded)
May 28 23:55:27.612: INFO: Container kube-scheduler ready: true, restart count 0
May 28 23:55:27.612: INFO: kube-apiserver-master1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded)
May 28 23:55:27.612: INFO: Container kube-apiserver ready: true, restart count 0
May 28 23:55:27.612: INFO: kube-controller-manager-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded)
May 28 23:55:27.612: INFO: Container kube-controller-manager ready: true, restart count 2
May 28 23:55:27.612: INFO: kube-multus-ds-amd64-n9j8k started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded)
May 28 23:55:27.612: INFO: Container kube-multus ready: true, restart count 1
W0528 23:55:27.625747 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
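------------------------------
Note: the FailedCreate events above show why no pod ever reached Running: three PodSecurityPolicy candidates were tried, and their hostPort whitelists ([9103-9104], [9100], and an empty list) all exclude the ReplicaSet pod's hostPort 5555. A hedged sketch of that inclusive range check; the psp-a/psp-b/psp-c names are illustrative, and the local hostPortRange type only mirrors the shape of the policy/v1beta1 HostPortRange type so the sketch runs with the standard library alone:

package main

import "fmt"

// hostPortRange mirrors policy/v1beta1 HostPortRange: Min/Max are inclusive.
type hostPortRange struct{ Min, Max int32 }

// admitted reports whether port falls inside any whitelisted range; an empty
// whitelist admits no host ports at all.
func admitted(port int32, allowed []hostPortRange) bool {
	for _, r := range allowed {
		if port >= r.Min && port <= r.Max {
			return true
		}
	}
	return false
}

func main() {
	// The three whitelists quoted in the FailedCreate events above.
	policies := map[string][]hostPortRange{
		"psp-a": {{Min: 9103, Max: 9104}},
		"psp-b": {{Min: 9100, Max: 9100}},
		"psp-c": {}, // allows no host ports
	}
	for name, ranges := range policies {
		fmt.Printf("%s admits hostPort 5555: %v\n", name, admitted(5555, ranges))
	}
}

Running it prints false for every policy, matching the three "Host port 5555 is not allowed to be used" clauses in each event.
------------------------------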
May 28 23:55:27.659: INFO: Latency metrics for node master1 May 28 23:55:27.659: INFO: Logging node info for node master2 May 28 23:55:27.662: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 b80f32b6-a396-4f09-a110-345a08d762ee 83249 0 2021-05-28 19:57:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"b2:be:c9:d8:cf:bb"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-28 20:06:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:00:49 +0000 UTC,LastTransitionTime:2021-05-28 20:00:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:18 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:18 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:18 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:55:18 +0000 UTC,LastTransitionTime:2021-05-28 20:00:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2746caf91c53460599f165aa716150cd,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:b63b522f-706f-4e28-a104-c73edcd04319,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:55:27.662: INFO: Logging kubelet events for node master2 May 28 23:55:27.664: INFO: Logging pods the kubelet thinks is on node master2 May 28 23:55:27.679: INFO: kube-apiserver-master2 started at 2021-05-28 20:05:31 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.679: INFO: Container kube-apiserver ready: true, restart count 0 May 28 23:55:27.680: INFO: kube-controller-manager-master2 started at 2021-05-28 20:05:41 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.680: INFO: Container kube-controller-manager ready: true, restart count 3 May 28 23:55:27.680: INFO: kube-flannel-xvtkj started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 28 23:55:27.680: INFO: Init container install-cni ready: true, restart count 0 May 28 23:55:27.680: INFO: Container kube-flannel ready: true, restart count 2 May 28 23:55:27.680: INFO: kube-multus-ds-amd64-qjwcz started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.680: INFO: Container kube-multus ready: true, restart count 1 May 28 23:55:27.680: INFO: 
node-feature-discovery-controller-5bf5c49849-n9ncl started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.680: INFO: Container nfd-controller ready: true, restart count 0 May 28 23:55:27.680: INFO: kube-scheduler-master2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.680: INFO: Container kube-scheduler ready: true, restart count 3 May 28 23:55:27.680: INFO: kube-proxy-jkbl8 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.680: INFO: Container kube-proxy ready: true, restart count 1 May 28 23:55:27.680: INFO: dns-autoscaler-5b7b5c9b6f-r797x started at 2021-05-28 19:59:31 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.680: INFO: Container autoscaler ready: true, restart count 1 May 28 23:55:27.680: INFO: node-exporter-frch9 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 28 23:55:27.680: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 28 23:55:27.680: INFO: Container node-exporter ready: true, restart count 0 W0528 23:55:27.693135 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 28 23:55:27.718: INFO: Latency metrics for node master2 May 28 23:55:27.718: INFO: Logging node info for node master3 May 28 23:55:27.720: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 301b0b5b-fc42-4c78-adb7-75baf6e0cc7e 83258 0 2021-05-28 19:57:14 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:fa:ab:49:88:02"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:12 +0000 UTC,LastTransitionTime:2021-05-28 20:02:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:20 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:20 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:20 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:55:20 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a0cb6c0eb1d842469076fff344213c13,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:c6adcff4-8bf7-40d7-9d14-54b1c6a87bc8,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:55:27.721: INFO: Logging kubelet events for node master3 May 28 23:55:27.723: INFO: Logging pods the kubelet thinks is on node master3 May 28 23:55:27.738: INFO: kube-apiserver-master3 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.738: INFO: Container kube-apiserver ready: true, restart count 0 May 28 23:55:27.739: INFO: kube-controller-manager-master3 started at 2021-05-28 20:06:02 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.739: INFO: Container kube-controller-manager ready: true, restart count 1 May 28 23:55:27.739: INFO: kube-scheduler-master3 started at 2021-05-28 20:01:23 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.739: INFO: Container kube-scheduler ready: true, restart count 1 May 28 23:55:27.739: INFO: kube-proxy-t5bh6 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.739: INFO: Container kube-proxy ready: true, restart count 1 May 28 23:55:27.739: INFO: kube-flannel-zrskq started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 28 23:55:27.739: INFO: Init container install-cni ready: true, restart count 0 May 28 23:55:27.739: INFO: Container kube-flannel ready: true, restart count 1 May 28 23:55:27.739: INFO: kube-multus-ds-amd64-wqgf7 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.739: INFO: Container kube-multus ready: true, restart count 1 May 28 23:55:27.739: INFO: coredns-7677f9bb54-8v554 started at 2021-05-28 19:59:28 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.739: INFO: Container coredns ready: true, restart count 1 May 28 23:55:27.739: INFO: node-exporter-w42s5 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 28 23:55:27.739: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 28 23:55:27.739: INFO: Container node-exporter ready: true, restart count 0 W0528 23:55:27.751631 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
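The block above is the framework's failure diagnostics: for each node it logs kubelet events and the pods the kubelet reports, with per-container ready state and restart counts. A rough way to reproduce that view outside the suite is sketched below. This is a hypothetical helper, not the framework's own code: the framework asks each kubelet via the API-server proxy, while this sketch uses an API-server field selector on spec.nodeName, so the two views can diverge if a kubelet and the API server disagree.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// dumpPodsOnNode (hypothetical name) lists the pods scheduled to a node and
// prints per-container readiness and restart counts, mirroring the
// "Container ... ready: true, restart count N" lines in the log above.
func dumpPodsOnNode(cs kubernetes.Interface, node string) error {
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node,
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n",
				st.Name, st.Ready, st.RestartCount)
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	if err := dumpPodsOnNode(kubernetes.NewForConfigOrDie(cfg), "master3"); err != nil {
		panic(err)
	}
}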
May 28 23:55:27.782: INFO: Latency metrics for node master3 May 28 23:55:27.782: INFO: Logging node info for node node1 May 28 23:55:27.785: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 43e51cb4-5acb-42b5-8f26-cd5e977f3829 83248 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"d2:9d:b7:73:58:07"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-28 22:27:50 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-05-28 22:35:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:58 +0000 UTC,LastTransitionTime:2021-05-28 20:01:58 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:18 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:18 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:18 
+0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:55:18 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:abe6e95dbfa24a9abd34d8fa2abe7655,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:17719d1f-7df5-4d95-81f3-7d3ac5110ba2,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[@ :],SizeBytes:1002495332,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:aa24a0a337084e0747e7c8e97e1131270ae38150e691314f1fa19f4b2f9093c0 golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad 
alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:55:27.786: INFO: Logging kubelet events for node node1 May 28 23:55:27.789: INFO: Logging pods the kubelet thinks is on node node1 May 28 23:55:27.809: INFO: kube-proxy-lsngv started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.809: INFO: Container kube-proxy ready: true, restart count 2 May 28 23:55:27.809: INFO: kube-multus-ds-amd64-x7826 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.809: INFO: Container kube-multus ready: true, restart count 1 May 28 23:55:27.809: INFO: nginx-proxy-node1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.809: INFO: Container nginx-proxy ready: true, restart count 1 May 28 23:55:27.809: INFO: kubernetes-metrics-scraper-678c97765c-wblkm started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.809: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 28 23:55:27.809: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.809: INFO: Container kube-sriovdp ready: true, restart count 0 May 28 23:55:27.809: INFO: cmk-jhzjr started at 2021-05-28 20:09:15 +0000 UTC (0+2 container statuses recorded) May 28 23:55:27.809: INFO: Container nodereport ready: true, restart count 0 May 28 23:55:27.809: INFO: Container reconcile ready: true, restart count 0 May 28 23:55:27.809: INFO: node-exporter-khdpg started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 28 23:55:27.809: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 28 23:55:27.809: INFO: Container node-exporter ready: true, restart count 0 May 28 23:55:27.809: INFO: kube-flannel-2tjjt started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 28 23:55:27.809: INFO: Init container install-cni ready: true, restart count 0 May 28 23:55:27.809: INFO: Container kube-flannel ready: true, restart count 2 May 28 23:55:27.809: INFO: collectd-qw9nd started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 28 23:55:27.809: INFO: Container collectd ready: true, restart count 0 May 28 23:55:27.809: INFO: Container collectd-exporter ready: true, restart count 0 May 28 23:55:27.809: INFO: Container rbac-proxy ready: true, restart count 0 May 28 23:55:27.809: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.809: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 28 23:55:27.809: INFO: 
node-feature-discovery-worker-5x4qg started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.809: INFO: Container nfd-worker ready: true, restart count 0 May 28 23:55:27.809: INFO: cmk-init-discover-node1-rvqxm started at 2021-05-28 20:08:32 +0000 UTC (0+3 container statuses recorded) May 28 23:55:27.809: INFO: Container discover ready: false, restart count 0 May 28 23:55:27.809: INFO: Container init ready: false, restart count 0 May 28 23:55:27.809: INFO: Container install ready: false, restart count 0 May 28 23:55:27.809: INFO: prometheus-k8s-0 started at 2021-05-28 20:10:26 +0000 UTC (0+5 container statuses recorded) May 28 23:55:27.809: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 28 23:55:27.809: INFO: Container grafana ready: true, restart count 0 May 28 23:55:27.809: INFO: Container prometheus ready: true, restart count 1 May 28 23:55:27.809: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 28 23:55:27.809: INFO: Container rules-configmap-reloader ready: true, restart count 0 W0528 23:55:27.823208 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 28 23:55:27.857: INFO: Latency metrics for node node1 May 28 23:55:27.857: INFO: Logging node info for node node2 May 28 23:55:27.860: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 3cc89580-b568-4c82-bd1f-200d0823da3b 83264 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 
kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"62:22:2c:ae:14:ae"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-28 22:35:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:05 +0000 UTC,LastTransitionTime:2021-05-28 20:01:05 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:22 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:22 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-28 23:55:22 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-28 23:55:22 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b2730c4b09814ab9a78e7bc62c820fbb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:f1459072-d21d-46de-a5d9-46ec9349aae0,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 28 23:55:27.860: INFO: Logging kubelet events for node node2 May 28 23:55:27.862: INFO: Logging pods the kubelet thinks is on node node2 May 28 23:55:27.879: INFO: nginx-proxy-node2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.879: INFO: Container nginx-proxy ready: true, restart count 2 May 28 23:55:27.879: INFO: collectd-b5dgp started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 28 23:55:27.879: INFO: 
Container collectd ready: true, restart count 0 May 28 23:55:27.879: INFO: Container collectd-exporter ready: true, restart count 0 May 28 23:55:27.879: INFO: Container rbac-proxy ready: true, restart count 0 May 28 23:55:27.879: INFO: kube-proxy-z5czn started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.879: INFO: Container kube-proxy ready: true, restart count 2 May 28 23:55:27.879: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-29vc6 started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.879: INFO: Container kube-sriovdp ready: true, restart count 0 May 28 23:55:27.879: INFO: cmk-init-discover-node2-95cbr started at 2021-05-28 20:08:52 +0000 UTC (0+3 container statuses recorded) May 28 23:55:27.879: INFO: Container discover ready: false, restart count 0 May 28 23:55:27.879: INFO: Container init ready: false, restart count 0 May 28 23:55:27.879: INFO: Container install ready: false, restart count 0 May 28 23:55:27.879: INFO: kube-flannel-d9wsg started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 28 23:55:27.879: INFO: Init container install-cni ready: true, restart count 2 May 28 23:55:27.879: INFO: Container kube-flannel ready: true, restart count 2 May 28 23:55:27.879: INFO: kube-multus-ds-amd64-c9cj2 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.879: INFO: Container kube-multus ready: true, restart count 1 May 28 23:55:27.879: INFO: node-feature-discovery-worker-j2wnf started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.879: INFO: Container nfd-worker ready: true, restart count 0 May 28 23:55:27.879: INFO: cmk-5vxwj started at 2021-05-28 20:09:16 +0000 UTC (0+2 container statuses recorded) May 28 23:55:27.879: INFO: Container nodereport ready: true, restart count 0 May 28 23:55:27.879: INFO: Container reconcile ready: true, restart count 0 May 28 23:55:27.879: INFO: cmk-webhook-6c9d5f8578-6cpp6 started at 2021-05-28 20:09:16 +0000 UTC (0+1 container statuses recorded) May 28 23:55:27.879: INFO: Container cmk-webhook ready: true, restart count 0 May 28 23:55:27.879: INFO: node-exporter-sjsht started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 28 23:55:27.879: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 28 23:55:27.879: INFO: Container node-exporter ready: true, restart count 0 May 28 23:55:27.879: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-csd8v started at 2021-05-28 20:13:00 +0000 UTC (0+2 container statuses recorded) May 28 23:55:27.879: INFO: Container tas-controller ready: true, restart count 0 May 28 23:55:27.879: INFO: Container tas-extender ready: true, restart count 0 W0528 23:55:27.892826 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 28 23:55:27.926: INFO: Latency metrics for node node2 May 28 23:55:27.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-8708" for this suite. 
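The eviction test torn down here spent its entire budget in a wait loop: it creates a PodDisruptionBudget and a ReplicaSet, then blocks waiting for one of the ReplicaSet's pods to be running before attempting an eviction. If admission never lets those pods be created, no pod ever reaches Running and the wait expires as "timed out waiting for the condition". A minimal sketch of such a wait loop follows; the helper name and the 2s/10m intervals are assumptions for illustration, not the framework's exact condition or timings.

package disruptionhelpers

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningPod (illustrative) polls until any pod in the namespace
// reaches phase Running. When admission rejects every pod the ReplicaSet
// controller tries to create, the condition never becomes true and
// PollImmediate returns wait.ErrWaitTimeout -- which the suite surfaces as
// "timed out waiting for the condition".
func waitForRunningPod(cs kubernetes.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == v1.PodRunning {
				return true, nil
			}
		}
		return false, nil // nothing running yet; keep polling
	})
}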
• Failure [602.407 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222 May 28 23:55:27.576: Unexpected error: <*errors.errorString | 0xc0003021f0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241 ------------------------------ {"msg":"FAILED [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction [Serial]","total":4,"completed":0,"skipped":1082,"failed":2,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","[sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction [Serial]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:312 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 28 23:55:27.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should not update pod when spec was updated and update strategy is OnDelete /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:312 May 28 23:55:27.978: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
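The DaemonSet spec that starts here verifies OnDelete semantics: after .spec.template is updated, existing daemon pods must keep running with the old template, and the controller may apply the new template only to pods recreated after a manual delete. Below is a sketch of the kind of DaemonSet involved; the label key, container name, and package name are illustrative assumptions rather than the test's exact spec (httpd:2.4.38-alpine is taken from the node image lists above).

package daemonsethelpers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOnDeleteDaemonSet (illustrative) creates a DaemonSet whose update
// strategy is OnDelete, so template changes are not rolled out automatically.
func createOnDeleteDaemonSet(cs kubernetes.Interface, ns string) (*appsv1.DaemonSet, error) {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.OnDeleteDaemonSetStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{
						{Name: "app", Image: "httpd:2.4.38-alpine"},
					},
				},
			},
		},
	}
	return cs.AppsV1().DaemonSets(ns).Create(context.TODO(), ds, metav1.CreateOptions{})
}

The repeated "can't tolerate node master1/2/3 ... NoSchedule" lines that follow are expected rather than errors: the pod template carries no toleration for the node-role.kubernetes.io/master taint, so the rollout check counts only node1 and node2.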
May 28 23:55:27.991: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:27.992: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:27.992: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:27.994: INFO: Number of nodes with available pods: 0 May 28 23:55:27.994: INFO: Node node1 is running more than one daemon pod May 28 23:55:28.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:28.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:28.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:29.001: INFO: Number of nodes with available pods: 0 May 28 23:55:29.002: INFO: Node node1 is running more than one daemon pod May 28 23:55:30.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:30.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:30.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:30.002: INFO: Number of nodes with available pods: 0 May 28 23:55:30.002: INFO: Node node1 is running more than one daemon pod May 28 23:55:31.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:31.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:31.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:31.003: INFO: Number of nodes with available pods: 0 May 28 23:55:31.003: INFO: Node node1 is running more than one daemon pod May 28 23:55:32.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:32.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:32.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:55:32.003: INFO: Number of nodes with available pods: 0 May 28 23:55:32.003: INFO: Node node1 is running more than one daemon pod May 28 23:55:33.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:55:33.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:55:33.002: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:55:33.006: INFO: Number of nodes with available pods: 0
May 28 23:55:33.007: INFO: Node node1 is running more than one daemon pod
[... the same five-entry check repeats roughly once per second from 23:55:34 through 23:55:58: each iteration skips master1, master2, and master3 for their node-role.kubernetes.io/master:NoSchedule taints, reports "Number of nodes with available pods: 0", then "Node node1 is running more than one daemon pod" ...]
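The strict once-per-second cadence above is the shape of a standard condition poll. A minimal sketch of that pattern with the apimachinery wait helper follows; checkDaemonPods is a hypothetical stand-in for the framework's real per-node availability check, and the 1s/10m values are illustrative, chosen only to match the cadence visible in this log:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkDaemonPods is a stand-in condition: report how many nodes have an
// available daemon pod and keep polling while that count is zero.
func checkDaemonPods() (bool, error) {
	nodesWithPods := 0 // would come from listing the DaemonSet's pods per schedulable node
	fmt.Printf("Number of nodes with available pods: %d\n", nodesWithPods)
	return nodesWithPods > 0, nil // false means: wait one interval and try again
}

func main() {
	// Retry once per second; if the condition never turns true, Poll returns
	// wait.ErrWaitTimeout after the overall timeout elapses.
	if err := wait.Poll(1*time.Second, 10*time.Minute, checkDaemonPods); err != nil {
		fmt.Println("poll failed:", err)
	}
}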
[... polling continues unchanged, roughly once per second, from 23:55:59 through 23:56:24, with the same skip/available-pods/daemon-pod entries every iteration ...]
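Each skipped master carries the node-role.kubernetes.io/master taint with an empty value and effect NoSchedule. For a DaemonSet that is supposed to land on those nodes, the pod spec would need a matching toleration; a minimal sketch with the core/v1 types (the surrounding program is illustrative, not part of the test):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// With Operator Exists the toleration matches the taint regardless of its
	// value, which suits the empty-value master taint seen in the log; Value
	// must be left empty when Exists is used.
	tol := v1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: v1.TolerationOpExists,
		Effect:   v1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
}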
[... polling continues unchanged, roughly once per second, from 23:56:25 through 23:57:14 ...]
May 28 23:57:15.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:57:15.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:57:15.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:57:15.004: INFO: Number of nodes with available pods: 0
May 28 23:57:15.004: INFO: Node node1 is running more than one daemon pod
May 28 23:57:16.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:57:16.002: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:57:16.002: INFO: DaemonSet pods can't tolerate node master3 with taints
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:16.004: INFO: Number of nodes with available pods: 0 May 28 23:57:16.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:17.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:17.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:17.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:17.003: INFO: Number of nodes with available pods: 0 May 28 23:57:17.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:18.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:18.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:18.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:18.002: INFO: Number of nodes with available pods: 0 May 28 23:57:18.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:18.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:18.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:18.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:19.002: INFO: Number of nodes with available pods: 0 May 28 23:57:19.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:20.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:20.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:20.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:20.004: INFO: Number of nodes with available pods: 0 May 28 23:57:20.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:21.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:21.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:21.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:21.002: 
INFO: Number of nodes with available pods: 0 May 28 23:57:21.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:22.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:22.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:22.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:22.004: INFO: Number of nodes with available pods: 0 May 28 23:57:22.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:23.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:23.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:23.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:23.003: INFO: Number of nodes with available pods: 0 May 28 23:57:23.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:23.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:23.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:23.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:24.002: INFO: Number of nodes with available pods: 0 May 28 23:57:24.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:24.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:24.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:24.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:25.002: INFO: Number of nodes with available pods: 0 May 28 23:57:25.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:26.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:26.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:26.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:26.002: INFO: Number of nodes with available pods: 0 May 28 23:57:26.003: INFO: Node node1 is running more than one daemon pod May 
28 23:57:27.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:27.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:27.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:27.004: INFO: Number of nodes with available pods: 0 May 28 23:57:27.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:28.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:28.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:28.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:28.003: INFO: Number of nodes with available pods: 0 May 28 23:57:28.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:29.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:29.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:29.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:29.003: INFO: Number of nodes with available pods: 0 May 28 23:57:29.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:30.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:30.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:30.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:30.003: INFO: Number of nodes with available pods: 0 May 28 23:57:30.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:30.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:30.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:30.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:31.002: INFO: Number of nodes with available pods: 0 May 28 23:57:31.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:32.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:32.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:32.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:32.002: INFO: Number of nodes with available pods: 0 May 28 23:57:32.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:33.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:33.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:33.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:33.004: INFO: Number of nodes with available pods: 0 May 28 23:57:33.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:33.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:33.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:33.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:34.002: INFO: Number of nodes with available pods: 0 May 28 23:57:34.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:35.002: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:35.002: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:35.002: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:35.005: INFO: Number of nodes with available pods: 0 May 28 23:57:35.006: INFO: Node node1 is running more than one daemon pod May 28 23:57:36.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:36.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:36.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:36.004: INFO: Number of nodes with available pods: 0 May 28 23:57:36.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:37.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:37.001: INFO: DaemonSet pods can't tolerate node 
master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:37.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:37.004: INFO: Number of nodes with available pods: 0 May 28 23:57:37.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:38.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:38.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:38.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:38.003: INFO: Number of nodes with available pods: 0 May 28 23:57:38.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:38.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:38.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:38.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:39.002: INFO: Number of nodes with available pods: 0 May 28 23:57:39.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:39.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:39.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:39.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:40.002: INFO: Number of nodes with available pods: 0 May 28 23:57:40.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:41.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:41.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:41.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:41.003: INFO: Number of nodes with available pods: 0 May 28 23:57:41.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:42.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:42.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 
28 23:57:42.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:42.003: INFO: Number of nodes with available pods: 0 May 28 23:57:42.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:43.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:43.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:43.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:43.003: INFO: Number of nodes with available pods: 0 May 28 23:57:43.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:43.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:43.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:43.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:44.001: INFO: Number of nodes with available pods: 0 May 28 23:57:44.001: INFO: Node node1 is running more than one daemon pod May 28 23:57:45.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:45.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:45.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:45.004: INFO: Number of nodes with available pods: 0 May 28 23:57:45.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:46.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:46.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:46.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:46.004: INFO: Number of nodes with available pods: 0 May 28 23:57:46.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:47.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:47.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:47.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:47.003: INFO: Number of nodes with available pods: 0 May 28 23:57:47.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:47.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:48.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:48.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:48.003: INFO: Number of nodes with available pods: 0 May 28 23:57:48.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:48.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:48.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:48.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:49.002: INFO: Number of nodes with available pods: 0 May 28 23:57:49.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:50.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:50.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:50.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:50.003: INFO: Number of nodes with available pods: 0 May 28 23:57:50.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:51.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:51.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:51.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:51.004: INFO: Number of nodes with available pods: 0 May 28 23:57:51.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:52.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:52.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:52.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:52.003: INFO: Number of nodes with available pods: 0 
May 28 23:57:52.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:53.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:53.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:53.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:53.003: INFO: Number of nodes with available pods: 0 May 28 23:57:53.003: INFO: Node node1 is running more than one daemon pod May 28 23:57:53.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:53.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:53.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:54.002: INFO: Number of nodes with available pods: 0 May 28 23:57:54.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:55.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:55.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:55.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:55.004: INFO: Number of nodes with available pods: 0 May 28 23:57:55.004: INFO: Node node1 is running more than one daemon pod May 28 23:57:56.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:56.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:56.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:56.004: INFO: Number of nodes with available pods: 0 May 28 23:57:56.005: INFO: Node node1 is running more than one daemon pod May 28 23:57:57.002: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:57.002: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:57.002: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:57.005: INFO: Number of nodes with available pods: 0 May 28 23:57:57.005: INFO: Node node1 is running more than one daemon pod May 28 23:57:57.998: INFO: DaemonSet pods can't 
tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:57.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:57.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:58.002: INFO: Number of nodes with available pods: 0 May 28 23:57:58.002: INFO: Node node1 is running more than one daemon pod May 28 23:57:58.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:58.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:58.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:57:59.002: INFO: Number of nodes with available pods: 0 May 28 23:57:59.002: INFO: Node node1 is running more than one daemon pod May 28 23:58:00.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:00.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:00.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:00.003: INFO: Number of nodes with available pods: 0 May 28 23:58:00.003: INFO: Node node1 is running more than one daemon pod May 28 23:58:00.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:00.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:00.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:01.002: INFO: Number of nodes with available pods: 0 May 28 23:58:01.002: INFO: Node node1 is running more than one daemon pod May 28 23:58:01.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:01.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:02.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:02.002: INFO: Number of nodes with available pods: 0 May 28 23:58:02.002: INFO: Node node1 is running more than one daemon pod May 28 23:58:03.002: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node May 28 23:58:03.002: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:03.002: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:03.005: INFO: Number of nodes with available pods: 0 May 28 23:58:03.005: INFO: Node node1 is running more than one daemon pod May 28 23:58:03.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:03.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:03.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:04.002: INFO: Number of nodes with available pods: 0 May 28 23:58:04.002: INFO: Node node1 is running more than one daemon pod May 28 23:58:05.002: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:05.002: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:05.002: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:05.004: INFO: Number of nodes with available pods: 0 May 28 23:58:05.004: INFO: Node node1 is running more than one daemon pod May 28 23:58:06.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:06.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:06.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:06.005: INFO: Number of nodes with available pods: 0 May 28 23:58:06.005: INFO: Node node1 is running more than one daemon pod May 28 23:58:06.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:06.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:06.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:07.002: INFO: Number of nodes with available pods: 0 May 28 23:58:07.002: INFO: Node node1 is running more than one daemon pod May 28 23:58:08.002: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:08.002: INFO: DaemonSet pods can't tolerate node master2 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:08.002: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:08.004: INFO: Number of nodes with available pods: 0 May 28 23:58:08.004: INFO: Node node1 is running more than one daemon pod May 28 23:58:08.998: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:08.998: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:08.998: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:09.001: INFO: Number of nodes with available pods: 0 May 28 23:58:09.001: INFO: Node node1 is running more than one daemon pod May 28 23:58:10.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:10.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:10.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:10.003: INFO: Number of nodes with available pods: 0 May 28 23:58:10.003: INFO: Node node1 is running more than one daemon pod May 28 23:58:11.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:11.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:11.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:11.004: INFO: Number of nodes with available pods: 0 May 28 23:58:11.004: INFO: Node node1 is running more than one daemon pod May 28 23:58:12.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:12.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:12.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:12.003: INFO: Number of nodes with available pods: 0 May 28 23:58:12.003: INFO: Node node1 is running more than one daemon pod May 28 23:58:13.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:13.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:13.000: 
INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:13.003: INFO: Number of nodes with available pods: 0 May 28 23:58:13.003: INFO: Node node1 is running more than one daemon pod May 28 23:58:13.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:13.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:14.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:14.003: INFO: Number of nodes with available pods: 0 May 28 23:58:14.003: INFO: Node node1 is running more than one daemon pod May 28 23:58:15.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:15.002: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:15.002: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:15.004: INFO: Number of nodes with available pods: 0 May 28 23:58:15.004: INFO: Node node1 is running more than one daemon pod May 28 23:58:15.998: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:15.998: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:15.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:16.001: INFO: Number of nodes with available pods: 0 May 28 23:58:16.001: INFO: Node node1 is running more than one daemon pod May 28 23:58:17.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:17.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:17.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:17.004: INFO: Number of nodes with available pods: 0 May 28 23:58:17.004: INFO: Node node1 is running more than one daemon pod May 28 23:58:18.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:18.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:18.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 28 23:58:18.004: INFO: Number of nodes with available pods: 0 May 28 23:58:18.004: INFO: Node node1 is running more than one daemon pod May 28 23:58:18.998: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:18.998: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:18.998: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:19.000: INFO: Number of nodes with available pods: 0 May 28 23:58:19.000: INFO: Node node1 is running more than one daemon pod May 28 23:58:20.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:20.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:20.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:20.004: INFO: Number of nodes with available pods: 0 May 28 23:58:20.004: INFO: Node node1 is running more than one daemon pod May 28 23:58:21.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:21.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:21.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:21.004: INFO: Number of nodes with available pods: 0 May 28 23:58:21.004: INFO: Node node1 is running more than one daemon pod May 28 23:58:21.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:21.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:21.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:22.002: INFO: Number of nodes with available pods: 0 May 28 23:58:22.002: INFO: Node node1 is running more than one daemon pod May 28 23:58:23.001: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:23.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:23.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 23:58:23.004: INFO: Number of nodes with available pods: 0 May 28 23:58:23.004: 
INFO: Node node1 is running more than one daemon pod
May 28 23:58:23.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:58:23.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:58:23.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 28 23:58:24.002: INFO: Number of nodes with available pods: 0
May 28 23:58:24.002: INFO: Node node1 is running more than one daemon pod
[... identical five-entry poll repeated roughly once per second from May 28 23:58:24 through May 29 00:00:10, every iteration reporting the same result: the three tainted master nodes skipped, 0 nodes with available pods, and node1 running more than one daemon pod ...]
May 29 00:00:10.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 00:00:10.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 00:00:10.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 00:00:11.001: INFO: Number of nodes with available pods: 0
May 29 00:00:11.001: INFO: Node node1 is running more than one daemon pod
May 29 00:00:11.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 00:00:11.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 29 00:00:11.999:
INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:12.002: INFO: Number of nodes with available pods: 0 May 29 00:00:12.002: INFO: Node node1 is running more than one daemon pod May 29 00:00:12.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:12.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:12.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:13.002: INFO: Number of nodes with available pods: 0 May 29 00:00:13.002: INFO: Node node1 is running more than one daemon pod May 29 00:00:13.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:13.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:13.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:14.003: INFO: Number of nodes with available pods: 0 May 29 00:00:14.003: INFO: Node node1 is running more than one daemon pod May 29 00:00:15.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:15.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:15.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:15.004: INFO: Number of nodes with available pods: 0 May 29 00:00:15.004: INFO: Node node1 is running more than one daemon pod May 29 00:00:16.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:16.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:16.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:16.003: INFO: Number of nodes with available pods: 0 May 29 00:00:16.003: INFO: Node node1 is running more than one daemon pod May 29 00:00:16.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:16.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:16.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 29 00:00:17.002: INFO: Number of nodes with available pods: 0 May 29 00:00:17.002: INFO: Node node1 is running more than one daemon pod May 29 00:00:17.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:17.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:17.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:18.002: INFO: Number of nodes with available pods: 0 May 29 00:00:18.002: INFO: Node node1 is running more than one daemon pod May 29 00:00:18.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:18.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:18.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:19.002: INFO: Number of nodes with available pods: 0 May 29 00:00:19.002: INFO: Node node1 is running more than one daemon pod May 29 00:00:19.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:19.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:19.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:20.002: INFO: Number of nodes with available pods: 0 May 29 00:00:20.002: INFO: Node node1 is running more than one daemon pod May 29 00:00:20.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:20.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:20.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:21.002: INFO: Number of nodes with available pods: 0 May 29 00:00:21.002: INFO: Node node1 is running more than one daemon pod May 29 00:00:22.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:22.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:22.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:22.003: INFO: Number of nodes with available pods: 0 May 29 00:00:22.003: 
INFO: Node node1 is running more than one daemon pod May 29 00:00:23.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:23.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:23.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:23.003: INFO: Number of nodes with available pods: 0 May 29 00:00:23.003: INFO: Node node1 is running more than one daemon pod May 29 00:00:23.999: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:23.999: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:23.999: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:24.002: INFO: Number of nodes with available pods: 0 May 29 00:00:24.002: INFO: Node node1 is running more than one daemon pod May 29 00:00:25.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:25.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:25.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:25.003: INFO: Number of nodes with available pods: 0 May 29 00:00:25.003: INFO: Node node1 is running more than one daemon pod May 29 00:00:26.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:26.000: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:26.000: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:26.003: INFO: Number of nodes with available pods: 0 May 29 00:00:26.003: INFO: Node node1 is running more than one daemon pod May 29 00:00:27.000: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:27.001: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:27.001: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 29 00:00:27.007: INFO: Number of nodes with available pods: 0 May 29 00:00:27.007: INFO: Node node1 is running more than one daemon pod May 29 00:00:28.001: INFO: DaemonSet pods can't tolerate node master1 
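The repeated skip messages above are expected rather than part of the failure: the test's DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so the framework excludes the three masters and only waits on the two worker nodes. For context, a minimal sketch using the client-go types (not the test's own manifest) of the toleration a DaemonSet pod template would need to be schedulable onto those tainted masters:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Toleration matching the taint shown in the log:
	// {Key:node-role.kubernetes.io/master Value: Effect:NoSchedule}
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists, // empty Value, so match on key presence alone
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
}
```

With Operator set to Exists and no Value, the toleration matches the logged taint purely on its key, which is the usual pattern for control-plane daemons such as kube-proxy.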
May 29 00:00:28.009: INFO: Number of nodes with available pods: 0
May 29 00:00:28.009: INFO: Node node1 is running more than one daemon pod
May 29 00:00:28.009: FAIL: error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc0003021f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:323 +0x4d5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a81500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a81500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001a81500, 0x4de5140)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9408, will wait for the garbage collector to delete the pods
May 29 00:00:28.073: INFO: Deleting DaemonSet.extensions daemon-set took: 7.187839ms
May 29 00:00:28.773: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.491507ms
May 29 00:00:31.976: INFO: Number of nodes with available pods: 0
May 29 00:00:31.976: INFO: Number of running nodes: 0, number of available pods: 0
May 29 00:00:31.983: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9408/daemonsets","resourceVersion":"84453"},"items":null}
May 29 00:00:31.986: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9408/pods","resourceVersion":"84453"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "daemonsets-9408".
STEP: Found 18 events.
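The "timed out waiting for the condition" string in the FAIL above is the generic wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned whenever a polled condition never turns true within its deadline; the per-second cadence of the poll output matches that pattern. A stand-in sketch of the mechanism, with a short interval and timeout so it finishes quickly (the condition body is illustrative, not the test's actual check):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll a condition until it is true or the timeout elapses.
	// The e2e DaemonSet checks use the same pattern at roughly
	// a one-second interval with a minutes-long timeout.
	err := wait.PollImmediate(100*time.Millisecond, 2*time.Second, func() (bool, error) {
		ready := 0              // stand-in for "nodes with available pods" from the log
		return ready >= 2, nil  // never satisfied here, so the poll times out
	})
	if err != nil {
		fmt.Println(err) // prints: timed out waiting for the condition
	}
}
```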
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:27 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-6z9sw
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:27 +0000 UTC - event for daemon-set-6z9sw: {default-scheduler } Scheduled: Successfully assigned daemonsets-9408/daemon-set-6z9sw to node2
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:28 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-82x45
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:28 +0000 UTC - event for daemon-set-82x45: {default-scheduler } Scheduled: Successfully assigned daemonsets-9408/daemon-set-82x45 to node1
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:29 +0000 UTC - event for daemon-set-6z9sw: {multus } AddedInterface: Add eth0 [10.244.3.128/24]
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:29 +0000 UTC - event for daemon-set-6z9sw: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:29 +0000 UTC - event for daemon-set-82x45: {kubelet node1} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:29 +0000 UTC - event for daemon-set-82x45: {multus } AddedInterface: Add eth0 [10.244.4.43/24]
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:30 +0000 UTC - event for daemon-set-6z9sw: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:30 +0000 UTC - event for daemon-set-6z9sw: {kubelet node2} Failed: Error: ImagePullBackOff
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:30 +0000 UTC - event for daemon-set-6z9sw: {kubelet node2} Failed: Error: ErrImagePull
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:30 +0000 UTC - event for daemon-set-6z9sw: {kubelet node2} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:30 +0000 UTC - event for daemon-set-82x45: {kubelet node1} Failed: Error: ErrImagePull
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:30 +0000 UTC - event for daemon-set-82x45: {kubelet node1} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:31 +0000 UTC - event for daemon-set-82x45: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
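The decisive events are the toomanyrequests failures above: both pulls of docker.io/library/httpd:2.4.38-alpine were rejected by Docker Hub's anonymous pull rate limit, so neither daemon pod could start and the roughly five-minute wait expired (the remaining events for daemon-set-82x45 continue below). A small probe of the current anonymous limit, following the check Docker documents against its ratelimitpreview/test image; this assumes outbound access to Docker Hub from wherever it runs:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Fetch an anonymous pull token for the rate-limit probe image.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest; the response headers report the current limit window
	// without consuming a pull.
	req, _ := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}
```

Authenticated pulls via imagePullSecrets, or mirroring the image into the cluster's local registry (localhost:30500 appears in the node image lists below), would take these pulls out of the anonymous quota.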
May 29 00:00:32.001: INFO: At 2021-05-28 23:55:34 +0000 UTC - event for daemon-set-82x45: {multus } AddedInterface: Add eth0 [10.244.4.44/24] May 29 00:00:32.001: INFO: At 2021-05-28 23:55:34 +0000 UTC - event for daemon-set-82x45: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine" May 29 00:00:32.001: INFO: At 2021-05-28 23:55:34 +0000 UTC - event for daemon-set-82x45: {kubelet node1} Failed: Error: ImagePullBackOff May 29 00:00:32.004: INFO: POD NODE PHASE GRACE CONDITIONS May 29 00:00:32.004: INFO: May 29 00:00:32.009: INFO: Logging node info for node master1 May 29 00:00:32.012: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 0aa78934-442a-44a3-8c5c-f827e18dd3d7 84426 0 2021-05-28 19:56:25 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0a:41:0b:9d:15:5a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:56:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:03 +0000 UTC,LastTransitionTime:2021-05-28 20:02:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:24 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:24 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:24 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:00:24 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7fb2c462cae4b9c990ab2e5c72f7816,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:24c06694-15ae-4da4-9143-144d98afdd8d,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 29 00:00:32.013: INFO: Logging kubelet events for node master1
May 29 00:00:32.015: INFO: Logging pods the kubelet thinks is on node master1
May 29 00:00:32.033: INFO: kube-apiserver-master1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.033: INFO: Container kube-apiserver ready: true, restart count 0
May 29 00:00:32.033: INFO: kube-controller-manager-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.033: INFO: Container kube-controller-manager ready: true, restart count 2
May 29 00:00:32.033: INFO: kube-multus-ds-amd64-n9j8k started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.033: INFO: Container kube-multus ready: true, restart count 1
May 29 00:00:32.033: INFO: docker-registry-docker-registry-56cbc7bc58-rbghz started at 2021-05-28 20:02:55 +0000 UTC (0+2 container statuses recorded)
May 29 00:00:32.033: INFO: Container docker-registry ready: true, restart count 0
May 29 00:00:32.033: INFO: Container nginx ready: true, restart count 0
May 29 00:00:32.033: INFO: prometheus-operator-5bb8cb9d8f-7wdtq started at 2021-05-28 20:10:02 +0000 UTC (0+2 container statuses recorded)
May 29 00:00:32.033: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 29 00:00:32.033: INFO: Container prometheus-operator ready: true, restart count 0
May 29 00:00:32.033: INFO: kube-scheduler-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.033: INFO: Container kube-scheduler ready: true, restart count 0
May 29 00:00:32.033: INFO: kube-proxy-994p2 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.033: INFO: Container kube-proxy ready: true, restart count 1
May 29 00:00:32.033: INFO: kube-flannel-d54gm started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded)
May 29 00:00:32.033: INFO: Init container install-cni ready: true, restart count 0
May 29 00:00:32.033: INFO: Container kube-flannel ready: true, restart count 2
May 29 00:00:32.033: INFO: coredns-7677f9bb54-zb7h8 started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.033: INFO: Container coredns ready: true, restart count 1
May 29 00:00:32.033: INFO: node-exporter-9b7pq started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded)
May 29 00:00:32.033: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 29 00:00:32.033: INFO: Container node-exporter ready: true, restart count 0
W0529 00:00:32.046836 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
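The Node Info entries in this section are single-line dumps of entire Node objects; for the scheduling skips earlier, the only field that matters is Spec.Taints. A hedged client-go sketch that lists just node names and their taints, assuming the kubeconfig path the suite logged at startup:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite reports at startup.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Expected output here: the three masters carry the NoSchedule taint,
	// node1 and node2 carry none.
	for _, n := range nodes.Items {
		fmt.Printf("%s taints=%v\n", n.Name, n.Spec.Taints)
	}
}
```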
May 29 00:00:32.071: INFO: Latency metrics for node master1 May 29 00:00:32.071: INFO: Logging node info for node master2 May 29 00:00:32.074: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 b80f32b6-a396-4f09-a110-345a08d762ee 84443 0 2021-05-28 19:57:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"b2:be:c9:d8:cf:bb"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-28 20:06:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:00:49 +0000 UTC,LastTransitionTime:2021-05-28 20:00:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:29 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:29 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:29 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:00:29 +0000 UTC,LastTransitionTime:2021-05-28 20:00:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2746caf91c53460599f165aa716150cd,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:b63b522f-706f-4e28-a104-c73edcd04319,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:00:32.074: INFO: Logging kubelet events for node master2 May 29 00:00:32.076: INFO: Logging pods the kubelet thinks is on node master2 May 29 00:00:32.092: INFO: kube-apiserver-master2 started at 2021-05-28 20:05:31 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.092: INFO: Container kube-apiserver ready: true, restart count 0 May 29 00:00:32.092: INFO: kube-controller-manager-master2 started at 2021-05-28 20:05:41 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.092: INFO: Container kube-controller-manager ready: true, restart count 3 May 29 00:00:32.092: INFO: kube-flannel-xvtkj started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 00:00:32.092: INFO: Init container install-cni ready: true, restart count 0 May 29 00:00:32.092: INFO: Container kube-flannel ready: true, restart count 2 May 29 00:00:32.092: INFO: kube-multus-ds-amd64-qjwcz started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.092: INFO: Container kube-multus ready: true, restart count 1 May 29 00:00:32.092: INFO: 
node-feature-discovery-controller-5bf5c49849-n9ncl started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.092: INFO: Container nfd-controller ready: true, restart count 0 May 29 00:00:32.092: INFO: kube-scheduler-master2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.092: INFO: Container kube-scheduler ready: true, restart count 3 May 29 00:00:32.092: INFO: kube-proxy-jkbl8 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.092: INFO: Container kube-proxy ready: true, restart count 1 May 29 00:00:32.092: INFO: dns-autoscaler-5b7b5c9b6f-r797x started at 2021-05-28 19:59:31 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.092: INFO: Container autoscaler ready: true, restart count 1 May 29 00:00:32.092: INFO: node-exporter-frch9 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 00:00:32.092: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:00:32.092: INFO: Container node-exporter ready: true, restart count 0 W0529 00:00:32.104642 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 00:00:32.128: INFO: Latency metrics for node master2 May 29 00:00:32.128: INFO: Logging node info for node master3 May 29 00:00:32.130: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 301b0b5b-fc42-4c78-adb7-75baf6e0cc7e 84456 0 2021-05-28 19:57:14 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:fa:ab:49:88:02"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:12 +0000 UTC,LastTransitionTime:2021-05-28 20:02:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:32 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:32 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:32 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:00:32 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a0cb6c0eb1d842469076fff344213c13,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:c6adcff4-8bf7-40d7-9d14-54b1c6a87bc8,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:00:32.131: INFO: Logging kubelet events for node master3 May 29 00:00:32.133: INFO: Logging pods the kubelet thinks is on node master3 May 29 00:00:32.145: INFO: node-exporter-w42s5 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 00:00:32.145: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:00:32.145: INFO: Container node-exporter ready: true, restart count 0 May 29 00:00:32.145: INFO: kube-apiserver-master3 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.145: INFO: Container kube-apiserver ready: true, restart count 0 May 29 00:00:32.145: INFO: kube-controller-manager-master3 started at 2021-05-28 20:06:02 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.145: INFO: Container kube-controller-manager ready: true, restart count 1 May 29 00:00:32.145: INFO: kube-scheduler-master3 started at 2021-05-28 20:01:23 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.146: INFO: Container kube-scheduler ready: true, restart count 1 May 29 00:00:32.146: INFO: kube-proxy-t5bh6 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.146: INFO: Container kube-proxy ready: true, restart count 1 May 29 00:00:32.146: INFO: kube-flannel-zrskq started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 00:00:32.146: INFO: Init container install-cni ready: true, restart count 0 May 29 00:00:32.146: INFO: Container kube-flannel ready: true, restart count 1 May 29 00:00:32.146: INFO: kube-multus-ds-amd64-wqgf7 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.146: INFO: Container kube-multus ready: true, restart count 1 May 29 00:00:32.146: INFO: coredns-7677f9bb54-8v554 started at 2021-05-28 19:59:28 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.146: INFO: Container coredns ready: true, restart count 1 W0529 00:00:32.158850 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
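The "Logging node info" entries above dump the full Node object for master3. For reference, here is a minimal client-go sketch (an assumption for illustration, not the e2e framework's own dump helper) that fetches the same Node and prints the NodeCondition block that the dump renders inline:

// nodeinfo_sketch.go, hedged example: uses the same kubeconfig path the run
// above reports, and the standard client-go API; only the output format is
// invented here.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The e2e run above reads /root/.kube/config; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "master3", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The log above renders these as NodeCondition{Type:..., Status:..., Reason:...}.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s: %s)\n", c.Type, c.Status, c.Reason, c.Message)
	}
}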
May 29 00:00:32.184: INFO: Latency metrics for node master3 May 29 00:00:32.184: INFO: Logging node info for node node1 May 29 00:00:32.188: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 43e51cb4-5acb-42b5-8f26-cd5e977f3829 84444 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"d2:9d:b7:73:58:07"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-28 22:27:50 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-05-28 22:35:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:58 +0000 UTC,LastTransitionTime:2021-05-28 20:01:58 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:29 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:29 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:29 
+0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:00:29 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:abe6e95dbfa24a9abd34d8fa2abe7655,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:17719d1f-7df5-4d95-81f3-7d3ac5110ba2,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[@ :],SizeBytes:1002495332,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:aa24a0a337084e0747e7c8e97e1131270ae38150e691314f1fa19f4b2f9093c0 golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad 
alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:00:32.188: INFO: Logging kubelet events for node node1 May 29 00:00:32.191: INFO: Logging pods the kubelet thinks is on node node1 May 29 00:00:32.227: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.227: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 00:00:32.227: INFO: node-feature-discovery-worker-5x4qg started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.227: INFO: Container nfd-worker ready: true, restart count 0 May 29 00:00:32.227: INFO: cmk-init-discover-node1-rvqxm started at 2021-05-28 20:08:32 +0000 UTC (0+3 container statuses recorded) May 29 00:00:32.227: INFO: Container discover ready: false, restart count 0 May 29 00:00:32.227: INFO: Container init ready: false, restart count 0 May 29 00:00:32.227: INFO: Container install ready: false, restart count 0 May 29 00:00:32.227: INFO: prometheus-k8s-0 started at 2021-05-28 20:10:26 +0000 UTC (0+5 container statuses recorded) May 29 00:00:32.227: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 00:00:32.227: INFO: Container grafana ready: true, restart count 0 May 29 00:00:32.227: INFO: Container prometheus ready: true, restart count 1 May 29 00:00:32.227: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 00:00:32.227: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 00:00:32.227: INFO: kube-proxy-lsngv started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.227: INFO: Container kube-proxy ready: true, restart count 2 May 29 00:00:32.227: INFO: kube-multus-ds-amd64-x7826 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.227: INFO: Container kube-multus ready: true, restart count 1 May 29 00:00:32.227: INFO: nginx-proxy-node1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.227: INFO: Container nginx-proxy ready: true, restart count 1 May 29 00:00:32.227: INFO: kubernetes-metrics-scraper-678c97765c-wblkm started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.227: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 00:00:32.227: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.227: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 00:00:32.227: INFO: cmk-jhzjr started at 2021-05-28 20:09:15 +0000 UTC (0+2 container statuses recorded) May 29 
00:00:32.227: INFO: Container nodereport ready: true, restart count 0 May 29 00:00:32.227: INFO: Container reconcile ready: true, restart count 0 May 29 00:00:32.227: INFO: node-exporter-khdpg started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 00:00:32.227: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:00:32.227: INFO: Container node-exporter ready: true, restart count 0 May 29 00:00:32.227: INFO: kube-flannel-2tjjt started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 00:00:32.227: INFO: Init container install-cni ready: true, restart count 0 May 29 00:00:32.227: INFO: Container kube-flannel ready: true, restart count 2 May 29 00:00:32.227: INFO: collectd-qw9nd started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 29 00:00:32.227: INFO: Container collectd ready: true, restart count 0 May 29 00:00:32.227: INFO: Container collectd-exporter ready: true, restart count 0 May 29 00:00:32.227: INFO: Container rbac-proxy ready: true, restart count 0 W0529 00:00:32.240821 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 00:00:32.280: INFO: Latency metrics for node node1 May 29 00:00:32.280: INFO: Logging node info for node node2 May 29 00:00:32.282: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 3cc89580-b568-4c82-bd1f-200d0823da3b 84419 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 
kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"62:22:2c:ae:14:ae"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-28 22:35:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:05 +0000 UTC,LastTransitionTime:2021-05-28 20:01:05 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:23 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:23 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:00:23 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:00:23 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b2730c4b09814ab9a78e7bc62c820fbb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:f1459072-d21d-46de-a5d9-46ec9349aae0,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:00:32.283: INFO: Logging kubelet events for node node2 May 29 00:00:32.285: INFO: Logging pods the kubelet thinks is on node node2 May 29 00:00:32.303: INFO: nginx-proxy-node2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:00:32.303: INFO: Container nginx-proxy ready: true, restart count 2 May 29 00:00:32.303: INFO: collectd-b5dgp started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 29 00:00:32.303: INFO: 
Container collectd ready: true, restart count 0
May 29 00:00:32.303: INFO: Container collectd-exporter ready: true, restart count 0
May 29 00:00:32.303: INFO: Container rbac-proxy ready: true, restart count 0
May 29 00:00:32.303: INFO: kube-proxy-z5czn started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.303: INFO: Container kube-proxy ready: true, restart count 2
May 29 00:00:32.303: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-29vc6 started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.303: INFO: Container kube-sriovdp ready: true, restart count 0
May 29 00:00:32.303: INFO: cmk-init-discover-node2-95cbr started at 2021-05-28 20:08:52 +0000 UTC (0+3 container statuses recorded)
May 29 00:00:32.303: INFO: Container discover ready: false, restart count 0
May 29 00:00:32.303: INFO: Container init ready: false, restart count 0
May 29 00:00:32.303: INFO: Container install ready: false, restart count 0
May 29 00:00:32.303: INFO: kube-flannel-d9wsg started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded)
May 29 00:00:32.303: INFO: Init container install-cni ready: true, restart count 2
May 29 00:00:32.303: INFO: Container kube-flannel ready: true, restart count 2
May 29 00:00:32.303: INFO: kube-multus-ds-amd64-c9cj2 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.303: INFO: Container kube-multus ready: true, restart count 1
May 29 00:00:32.303: INFO: node-feature-discovery-worker-j2wnf started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.303: INFO: Container nfd-worker ready: true, restart count 0
May 29 00:00:32.303: INFO: cmk-5vxwj started at 2021-05-28 20:09:16 +0000 UTC (0+2 container statuses recorded)
May 29 00:00:32.303: INFO: Container nodereport ready: true, restart count 0
May 29 00:00:32.303: INFO: Container reconcile ready: true, restart count 0
May 29 00:00:32.303: INFO: cmk-webhook-6c9d5f8578-6cpp6 started at 2021-05-28 20:09:16 +0000 UTC (0+1 container statuses recorded)
May 29 00:00:32.303: INFO: Container cmk-webhook ready: true, restart count 0
May 29 00:00:32.303: INFO: node-exporter-sjsht started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded)
May 29 00:00:32.303: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 29 00:00:32.303: INFO: Container node-exporter ready: true, restart count 0
May 29 00:00:32.303: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-csd8v started at 2021-05-28 20:13:00 +0000 UTC (0+2 container statuses recorded)
May 29 00:00:32.303: INFO: Container tas-controller ready: true, restart count 0
May 29 00:00:32.303: INFO: Container tas-extender ready: true, restart count 0
W0529 00:00:32.315115      22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 29 00:00:32.345: INFO: Latency metrics for node node2
May 29 00:00:32.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9408" for this suite.
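The "timed out waiting for the condition" string in the failure below (and in the two earlier DisruptionController failures) is the fixed message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait: any e2e check that polls a condition and never sees it become true surfaces exactly this error. A hedged, toy-scale reduction of that pattern (the variable name and timeout here are stand-ins, not the test's code):

// poll_sketch.go: the same wait.Poll shape the e2e checks use, with a short
// timeout so it fails quickly; the daemon set test polls for minutes.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	err := wait.Poll(1*time.Second, 5*time.Second, func() (bool, error) {
		nodesWithAvailablePods := 0 // stand-in for the real cluster query
		return nodesWithAvailablePods > 0, nil
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}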
• Failure [304.410 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not update pod when spec was updated and update strategy is OnDelete [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:312

  May 29 00:00:28.009: error waiting for daemon pod to start
  Unexpected error:
      <*errors.errorString | 0xc0003021f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:323
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete","total":4,"completed":0,"skipped":1702,"failed":3,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","[sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction [Serial]","[sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete"]}
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:229
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 00:00:32.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon with node affinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:229
May 29 00:00:32.428: INFO: Creating daemon "daemon-set" with a node affinity
STEP: Initially, daemon pods should not be running on any nodes.
May 29 00:00:32.437: INFO: Number of nodes with available pods: 0
May 29 00:00:32.437: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
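This spec creates a DaemonSet whose pods carry a hard node-affinity requirement, so nothing schedules until a node is relabeled; the repeated "Number of nodes with available pods" lines that follow are the poll waiting for that first pod. An illustrative sketch of the DaemonSet shape (the label key "color", the pod names, and the image choice are assumptions taken from the STEP text, not the test's exact code):

// daemonset_affinity_sketch.go: a DaemonSet that schedules only onto nodes
// labeled color=blue; with no such node it runs zero pods, matching
// "Initially, daemon pods should not be running on any nodes." above.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func blueDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"app": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Affinity: &corev1.Affinity{
						NodeAffinity: &corev1.NodeAffinity{
							// Hard requirement: until some node carries
							// color=blue, the scheduler places no pods.
							RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
								NodeSelectorTerms: []corev1.NodeSelectorTerm{{
									MatchExpressions: []corev1.NodeSelectorRequirement{{
										Key:      "color", // assumed label key
										Operator: corev1.NodeSelectorOpIn,
										Values:   []string{"blue"},
									}},
								}},
							},
						},
					},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.2", // already cached on the nodes above
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(blueDaemonSet().Name)
}

Relabeling exactly one node (kubectl label node node2 color=blue, in kubectl terms) should then satisfy the affinity on that node only, which is the transition the poll below is waiting to observe.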
May 29 00:00:32.458: INFO: Number of nodes with available pods: 0
May 29 00:00:32.458: INFO: Node node2 is running more than one daemon pod
May 29 00:00:33.462: INFO: Number of nodes with available pods: 0
May 29 00:00:33.462: INFO: Node node2 is running more than one daemon pod
[... the same two INFO lines repeat at roughly one-second intervals from 00:00:34 through 00:05:31 while the test polls for an available daemon pod; ~300 identical iterations omitted ...]
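The repeating pair of INFO lines above is produced by the e2e framework's one-second poll over the DaemonSet's status, and the generic "timed out waiting for the condition" error in the failure below is what apimachinery's wait helpers return when such a poll exhausts its timeout. A minimal sketch of that pattern follows; it is not the framework's exact code, and assumes client-go and apimachinery at the v1.19-era APIs, with the kubeconfig path, namespace, and DaemonSet name taken from this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForDaemonSetAvailable polls once per second until the DaemonSet
    // reports at least one available pod. If the timeout elapses first,
    // wait.PollImmediate returns the generic "timed out waiting for the
    // condition" error that the FAIL below wraps. (Simplified: the real
    // helper also counts daemon pods per node, which is where the
    // "running more than one daemon pod" message comes from.)
    func waitForDaemonSetAvailable(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            ds, err := c.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err // a lookup error aborts the poll immediately
            }
            fmt.Printf("Number of nodes with available pods: %d\n", ds.Status.NumberAvailable)
            return ds.Status.NumberAvailable > 0, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        c := kubernetes.NewForConfigOrDie(cfg)
        // daemonsets-1433/daemon-set and the ~5m window match this run.
        if err := waitForDaemonSetAvailable(c, "daemonsets-1433", "daemon-set", 5*time.Minute); err != nil {
            fmt.Println("FAIL:", err)
        }
    }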
May 29 00:05:32.464: INFO: Number of nodes with available pods: 0
May 29 00:05:32.464: INFO: Node node2 is running more than one daemon pod
May 29 00:05:32.467: INFO: Number of nodes with available pods: 0
May 29 00:05:32.467: INFO: Node node2 is running more than one daemon pod
May 29 00:05:32.468: FAIL: error waiting for daemon pods to be running on new nodes
Unexpected error:
    <*errors.errorString | 0xc0003021f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:266 +0xa0e
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a81500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a81500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001a81500, 0x4de5140)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1433, will wait for the garbage collector to delete the pods
May 29 00:05:32.534: INFO: Deleting DaemonSet.extensions daemon-set took: 8.714206ms
May 29 00:05:33.234: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.346208ms
May 29 00:05:37.040: INFO: Number of nodes with available pods: 0
May 29 00:05:37.040: INFO: Number of running nodes: 0, number of available pods: 0
May 29 00:05:37.043: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1433/daemonsets","resourceVersion":"85602"},"items":null}
May 29 00:05:37.046: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1433/pods","resourceVersion":"85602"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "daemonsets-1433".
STEP: Found 8 events.
May 29 00:05:37.066: INFO: At 2021-05-29 00:00:32 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-d6wlz
May 29 00:05:37.066: INFO: At 2021-05-29 00:00:32 +0000 UTC - event for daemon-set-d6wlz: {default-scheduler } Scheduled: Successfully assigned daemonsets-1433/daemon-set-d6wlz to node2
May 29 00:05:37.066: INFO: At 2021-05-29 00:00:33 +0000 UTC - event for daemon-set-d6wlz: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 29 00:05:37.066: INFO: At 2021-05-29 00:00:33 +0000 UTC - event for daemon-set-d6wlz: {multus } AddedInterface: Add eth0 [10.244.3.129/24]
May 29 00:05:37.066: INFO: At 2021-05-29 00:00:35 +0000 UTC - event for daemon-set-d6wlz: {kubelet node2} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 00:05:37.066: INFO: At 2021-05-29 00:00:35 +0000 UTC - event for daemon-set-d6wlz: {kubelet node2} Failed: Error: ErrImagePull May 29 00:05:37.066: INFO: At 2021-05-29 00:00:35 +0000 UTC - event for daemon-set-d6wlz: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine" May 29 00:05:37.066: INFO: At 2021-05-29 00:00:35 +0000 UTC - event for daemon-set-d6wlz: {kubelet node2} Failed: Error: ImagePullBackOff May 29 00:05:37.068: INFO: POD NODE PHASE GRACE CONDITIONS May 29 00:05:37.068: INFO: May 29 00:05:37.073: INFO: Logging node info for node master1 May 29 00:05:37.075: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 0aa78934-442a-44a3-8c5c-f827e18dd3d7 85595 0 2021-05-28 19:56:25 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0a:41:0b:9d:15:5a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:56:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:03 +0000 UTC,LastTransitionTime:2021-05-28 20:02:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:35 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:35 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:35 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:05:35 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7fb2c462cae4b9c990ab2e5c72f7816,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:24c06694-15ae-4da4-9143-144d98afdd8d,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ 
:],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:05:37.076: INFO: Logging kubelet events for node master1 May 29 00:05:37.078: INFO: Logging pods the kubelet thinks is on node master1 May 29 00:05:37.094: INFO: kube-scheduler-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.094: INFO: Container kube-scheduler ready: true, restart count 0 May 29 00:05:37.094: INFO: kube-apiserver-master1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.094: INFO: Container kube-apiserver ready: true, restart count 0 May 29 00:05:37.094: INFO: kube-controller-manager-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.094: INFO: Container kube-controller-manager ready: true, restart count 2 May 29 00:05:37.094: INFO: kube-multus-ds-amd64-n9j8k started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.094: INFO: Container kube-multus ready: true, restart count 1 May 29 00:05:37.094: INFO: docker-registry-docker-registry-56cbc7bc58-rbghz started at 2021-05-28 20:02:55 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.094: INFO: Container docker-registry ready: true, restart count 0 May 29 00:05:37.094: INFO: Container nginx ready: true, restart count 0 May 29 00:05:37.094: INFO: prometheus-operator-5bb8cb9d8f-7wdtq started at 2021-05-28 20:10:02 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.094: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:05:37.094: INFO: Container prometheus-operator ready: true, restart count 0 May 29 00:05:37.094: INFO: kube-proxy-994p2 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.094: INFO: Container kube-proxy ready: true, restart count 1 May 29 00:05:37.094: INFO: kube-flannel-d54gm started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 00:05:37.094: INFO: Init container install-cni ready: true, restart count 0 May 29 00:05:37.094: INFO: Container kube-flannel ready: true, restart count 2 May 29 00:05:37.094: INFO: coredns-7677f9bb54-zb7h8 started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.094: INFO: Container coredns ready: true, restart count 1 May 29 00:05:37.094: INFO: node-exporter-9b7pq started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.094: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:05:37.094: INFO: Container node-exporter ready: true, restart count 0 W0529 00:05:37.107547 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
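The per-node pod dump above ("Logging pods the kubelet thinks is on node master1") can be reproduced against the same cluster. A minimal client-go sketch, assuming the same imports and clientset as the earlier snippet; the field selector is the same one kubectl accepts via --field-selector spec.nodeName=master1:

    // listPodsOnNode prints every pod scheduled to the given node, across
    // all namespaces, roughly matching the framework's per-node listing.
    func listPodsOnNode(c kubernetes.Interface, node string) error {
        pods, err := c.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
            FieldSelector: "spec.nodeName=" + node,
        })
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s phase=%s node=%s\n", p.Namespace, p.Name, p.Status.Phase, p.Spec.NodeName)
        }
        return nil
    }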
May 29 00:05:37.134: INFO: Latency metrics for node master1 May 29 00:05:37.134: INFO: Logging node info for node master2 May 29 00:05:37.137: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 b80f32b6-a396-4f09-a110-345a08d762ee 85574 0 2021-05-28 19:57:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"b2:be:c9:d8:cf:bb"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-28 20:06:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:00:49 +0000 UTC,LastTransitionTime:2021-05-28 20:00:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:31 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:31 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:31 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:05:31 +0000 UTC,LastTransitionTime:2021-05-28 20:00:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2746caf91c53460599f165aa716150cd,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:b63b522f-706f-4e28-a104-c73edcd04319,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:05:37.137: INFO: Logging kubelet events for node master2 May 29 00:05:37.139: INFO: Logging pods the kubelet thinks is on node master2 May 29 00:05:37.153: INFO: kube-scheduler-master2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.153: INFO: Container kube-scheduler ready: true, restart count 3 May 29 00:05:37.153: INFO: kube-proxy-jkbl8 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.153: INFO: Container kube-proxy ready: true, restart count 1 May 29 00:05:37.153: INFO: dns-autoscaler-5b7b5c9b6f-r797x started at 2021-05-28 19:59:31 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.153: INFO: Container autoscaler ready: true, restart count 1 May 29 00:05:37.153: INFO: node-exporter-frch9 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.153: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:05:37.153: INFO: Container node-exporter ready: true, restart count 0 May 29 00:05:37.153: INFO: kube-apiserver-master2 started at 2021-05-28 20:05:31 +0000 UTC 
(0+1 container statuses recorded) May 29 00:05:37.153: INFO: Container kube-apiserver ready: true, restart count 0 May 29 00:05:37.153: INFO: kube-controller-manager-master2 started at 2021-05-28 20:05:41 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.153: INFO: Container kube-controller-manager ready: true, restart count 3 May 29 00:05:37.153: INFO: kube-flannel-xvtkj started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 00:05:37.153: INFO: Init container install-cni ready: true, restart count 0 May 29 00:05:37.153: INFO: Container kube-flannel ready: true, restart count 2 May 29 00:05:37.153: INFO: kube-multus-ds-amd64-qjwcz started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.153: INFO: Container kube-multus ready: true, restart count 1 May 29 00:05:37.153: INFO: node-feature-discovery-controller-5bf5c49849-n9ncl started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.153: INFO: Container nfd-controller ready: true, restart count 0 W0529 00:05:37.164922 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 00:05:37.195: INFO: Latency metrics for node master2 May 29 00:05:37.195: INFO: Logging node info for node master3 May 29 00:05:37.197: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 301b0b5b-fc42-4c78-adb7-75baf6e0cc7e 85584 0 2021-05-28 19:57:14 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:fa:ab:49:88:02"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:12 +0000 UTC,LastTransitionTime:2021-05-28 20:02:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:33 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:33 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:33 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:05:33 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a0cb6c0eb1d842469076fff344213c13,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:c6adcff4-8bf7-40d7-9d14-54b1c6a87bc8,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:05:37.197: INFO: Logging kubelet events for node master3 May 29 00:05:37.199: INFO: Logging pods the kubelet thinks is on node master3 May 29 00:05:37.213: INFO: coredns-7677f9bb54-8v554 started at 2021-05-28 19:59:28 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.213: INFO: Container coredns ready: true, restart count 1 May 29 00:05:37.213: INFO: node-exporter-w42s5 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.213: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:05:37.213: INFO: Container node-exporter ready: true, restart count 0 May 29 00:05:37.213: INFO: kube-apiserver-master3 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.213: INFO: Container kube-apiserver ready: true, restart count 0 May 29 00:05:37.213: INFO: kube-controller-manager-master3 started at 2021-05-28 20:06:02 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.213: INFO: Container kube-controller-manager ready: true, restart count 1 May 29 00:05:37.213: INFO: kube-scheduler-master3 started at 2021-05-28 20:01:23 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.213: INFO: Container kube-scheduler ready: true, restart count 1 May 29 00:05:37.213: INFO: kube-proxy-t5bh6 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.213: INFO: Container kube-proxy ready: true, restart count 1 May 29 00:05:37.213: INFO: kube-flannel-zrskq started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 00:05:37.213: INFO: Init container install-cni ready: true, restart count 0 May 29 00:05:37.213: INFO: Container kube-flannel ready: true, restart count 1 May 29 00:05:37.213: INFO: kube-multus-ds-amd64-wqgf7 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.213: INFO: Container kube-multus ready: true, restart count 1 W0529 00:05:37.226906 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
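The Node dumps above are the framework's full serialized Node objects. For a quicker read of the same facts they contain -- the node-role.kubernetes.io/master:NoSchedule taint on the masters, the Ready condition, allocatable CPU -- a minimal client-go sketch such as the following would do. This is not part of the e2e framework; the standalone packaging and the default kubeconfig path are assumptions (the run above used /root/.kube/config).

// Minimal sketch (not part of the e2e framework): summarize the node facts
// that the full Node dumps above contain. Assumes a kubeconfig at the
// default location.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := corev1.ConditionUnknown
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = c.Status
			}
		}
		// The masters carry the node-role.kubernetes.io/master:NoSchedule
		// taint seen in the dumps, which is why ordinary test pods land
		// only on node1 and node2.
		fmt.Printf("%-8s Ready=%-7s taints=%v cpu=%s\n",
			n.Name, ready, n.Spec.Taints, n.Status.Allocatable.Cpu())
	}
}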
May 29 00:05:37.255: INFO: Latency metrics for node master3 May 29 00:05:37.255: INFO: Logging node info for node node1 May 29 00:05:37.257: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 43e51cb4-5acb-42b5-8f26-cd5e977f3829 85577 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"d2:9d:b7:73:58:07"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-28 22:27:50 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-05-28 22:35:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:58 +0000 UTC,LastTransitionTime:2021-05-28 20:01:58 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:31 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:31 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:31 
+0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:05:31 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:abe6e95dbfa24a9abd34d8fa2abe7655,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:17719d1f-7df5-4d95-81f3-7d3ac5110ba2,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[@ :],SizeBytes:1002495332,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:aa24a0a337084e0747e7c8e97e1131270ae38150e691314f1fa19f4b2f9093c0 golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad 
alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:05:37.258: INFO: Logging kubelet events for node node1 May 29 00:05:37.259: INFO: Logging pods the kubelet thinks is on node node1 May 29 00:05:37.279: INFO: node-exporter-khdpg started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.280: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:05:37.280: INFO: Container node-exporter ready: true, restart count 0 May 29 00:05:37.280: INFO: kube-flannel-2tjjt started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 00:05:37.280: INFO: Init container install-cni ready: true, restart count 0 May 29 00:05:37.280: INFO: Container kube-flannel ready: true, restart count 2 May 29 00:05:37.280: INFO: collectd-qw9nd started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 29 00:05:37.280: INFO: Container collectd ready: true, restart count 0 May 29 00:05:37.280: INFO: Container collectd-exporter ready: true, restart count 0 May 29 00:05:37.280: INFO: Container rbac-proxy ready: true, restart count 0 May 29 00:05:37.280: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.280: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 00:05:37.280: INFO: node-feature-discovery-worker-5x4qg started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.280: INFO: Container nfd-worker ready: true, restart count 0 May 29 00:05:37.280: INFO: cmk-init-discover-node1-rvqxm started at 2021-05-28 20:08:32 +0000 UTC (0+3 container statuses recorded) May 29 00:05:37.280: INFO: Container discover ready: false, restart count 0 May 29 00:05:37.280: INFO: Container init ready: false, restart count 0 May 29 00:05:37.280: INFO: Container install ready: false, restart count 0 May 29 00:05:37.280: INFO: prometheus-k8s-0 started at 2021-05-28 20:10:26 +0000 UTC (0+5 container statuses recorded) May 29 00:05:37.280: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 00:05:37.280: INFO: Container grafana ready: true, restart count 0 May 29 00:05:37.280: INFO: Container prometheus ready: true, restart count 1 May 29 00:05:37.280: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 00:05:37.280: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 00:05:37.280: INFO: kube-proxy-lsngv started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.280: INFO: Container kube-proxy ready: true, restart count 2 May 29 00:05:37.280: INFO: kube-multus-ds-amd64-x7826 started at 
2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.280: INFO: Container kube-multus ready: true, restart count 1 May 29 00:05:37.280: INFO: nginx-proxy-node1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.280: INFO: Container nginx-proxy ready: true, restart count 1 May 29 00:05:37.280: INFO: kubernetes-metrics-scraper-678c97765c-wblkm started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.280: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 00:05:37.280: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.280: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 00:05:37.280: INFO: cmk-jhzjr started at 2021-05-28 20:09:15 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.280: INFO: Container nodereport ready: true, restart count 0 May 29 00:05:37.280: INFO: Container reconcile ready: true, restart count 0 W0529 00:05:37.293238 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 00:05:37.340: INFO: Latency metrics for node node1 May 29 00:05:37.340: INFO: Logging node info for node node2 May 29 00:05:37.343: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 3cc89580-b568-4c82-bd1f-200d0823da3b 85603 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 
kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"62:22:2c:ae:14:ae"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-28 22:35:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:05 +0000 UTC,LastTransitionTime:2021-05-28 20:01:05 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:36 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:36 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 00:05:36 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 00:05:36 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b2730c4b09814ab9a78e7bc62c820fbb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:f1459072-d21d-46de-a5d9-46ec9349aae0,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 00:05:37.343: INFO: Logging kubelet events for node node2 May 29 00:05:37.345: INFO: Logging pods the kubelet thinks is on node node2 May 29 00:05:37.361: INFO: node-feature-discovery-worker-j2wnf started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.361: INFO: Container nfd-worker ready: true, restart count 0 May 29 00:05:37.361: INFO: cmk-5vxwj started at 2021-05-28 20:09:16 +0000 UTC (0+2 container statuses recorded) May 29 
00:05:37.361: INFO: Container nodereport ready: true, restart count 0 May 29 00:05:37.361: INFO: Container reconcile ready: true, restart count 0 May 29 00:05:37.361: INFO: cmk-webhook-6c9d5f8578-6cpp6 started at 2021-05-28 20:09:16 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.361: INFO: Container cmk-webhook ready: true, restart count 0 May 29 00:05:37.361: INFO: node-exporter-sjsht started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.361: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 00:05:37.361: INFO: Container node-exporter ready: true, restart count 0 May 29 00:05:37.361: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-csd8v started at 2021-05-28 20:13:00 +0000 UTC (0+2 container statuses recorded) May 29 00:05:37.362: INFO: Container tas-controller ready: true, restart count 0 May 29 00:05:37.362: INFO: Container tas-extender ready: true, restart count 0 May 29 00:05:37.362: INFO: nginx-proxy-node2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.362: INFO: Container nginx-proxy ready: true, restart count 2 May 29 00:05:37.362: INFO: collectd-b5dgp started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 29 00:05:37.362: INFO: Container collectd ready: true, restart count 0 May 29 00:05:37.362: INFO: Container collectd-exporter ready: true, restart count 0 May 29 00:05:37.362: INFO: Container rbac-proxy ready: true, restart count 0 May 29 00:05:37.362: INFO: kube-proxy-z5czn started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.362: INFO: Container kube-proxy ready: true, restart count 2 May 29 00:05:37.362: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-29vc6 started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.362: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 00:05:37.362: INFO: cmk-init-discover-node2-95cbr started at 2021-05-28 20:08:52 +0000 UTC (0+3 container statuses recorded) May 29 00:05:37.362: INFO: Container discover ready: false, restart count 0 May 29 00:05:37.362: INFO: Container init ready: false, restart count 0 May 29 00:05:37.362: INFO: Container install ready: false, restart count 0 May 29 00:05:37.362: INFO: kube-flannel-d9wsg started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 00:05:37.362: INFO: Init container install-cni ready: true, restart count 2 May 29 00:05:37.362: INFO: Container kube-flannel ready: true, restart count 2 May 29 00:05:37.362: INFO: kube-multus-ds-amd64-c9cj2 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 00:05:37.362: INFO: Container kube-multus ready: true, restart count 1 W0529 00:05:37.373232 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 00:05:37.412: INFO: Latency metrics for node node2 May 29 00:05:37.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1433" for this suite. 
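Every failure recorded below carries the same opaque message, "timed out waiting for the condition". That string is not specific to DaemonSets or PodDisruptionBudgets: it is the text of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned whenever a polled condition never becomes true before its deadline. A sketch of the polling pattern follows; the function name, selector, and timeouts are assumptions, not the framework's (the real checks live in test/e2e/apps/daemon_set.go and disruption.go).

// Sketch of the poll-until-timeout pattern behind the failures below.
package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonPodsRunning is a stand-in name: poll until every pod matching
// selector is Running, or give up after the deadline.
func waitForDaemonPodsRunning(c kubernetes.Interface, ns, selector string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err // hard API error: abort the poll
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet: poll again
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // some pod not yet Running: poll again
			}
		}
		return true, nil
	})
	// If the deadline passes first, PollImmediate returns wait.ErrWaitTimeout,
	// whose Error() is exactly "timed out waiting for the condition".
}

The 305-second spec duration reported below is consistent with a roughly five-minute poll deadline plus teardown, which is why each failing spec burns its full timeout before reporting.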
• Failure [305.025 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon with node affinity [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:229

  May 29 00:05:32.468: error waiting for daemon pods to be running on new nodes
  Unexpected error:
      <*errors.errorString | 0xc0003021f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:266
------------------------------
{"msg":"FAILED [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity","total":4,"completed":0,"skipped":5172,"failed":4,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","[sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction [Serial]","[sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete","[sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 29 00:05:37.422: INFO: Running AfterSuite actions on all nodes
May 29 00:05:37.422: INFO: Running AfterSuite actions on node 1
May 29 00:05:37.422: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_apps_serial/junit_01.xml
{"msg":"Test Suite completed","total":4,"completed":0,"skipped":5480,"failed":4,"failures":["[sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer =\u003e should not allow an eviction [Serial]","[sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction [Serial]","[sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete","[sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity"]}

Summarizing 4 Failures:

[Fail] [sig-apps] DisruptionController [It] evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241

[Fail] [sig-apps] DisruptionController [It] evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:241

[Fail] [sig-apps] Daemon set [Serial] [It] should not update pod when spec was updated and update strategy is OnDelete
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:323

[Fail] [sig-apps] Daemon set [Serial] [It] should run and stop complex daemon with node affinity
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:266

Ran 4 of 5484 Specs in 1812.442 seconds
FAIL! -- 0 Passed | 4 Failed | 0 Pending | 5480 Skipped
--- FAIL: TestE2E (1812.48s)
FAIL

Ginkgo ran 1 suite in 30m13.634821456s
Test Suite Failed
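------------------------------
Note on the shared failure mode: all four specs fail with the identical wrapped error, "timed out waiting for the condition". That string is the message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, which the polling helpers return when a condition never becomes true before the deadline; in the DaemonSet case above, the condition is daemon pods reaching Running on the targeted nodes. The sketch below illustrates that polling pattern under stated assumptions: the kubeconfig path, namespace, and label selector are hypothetical stand-ins, not the tests' actual values.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 5m. If the condition never returns true,
	// PollImmediate returns wait.ErrWaitTimeout, whose Error() is exactly
	// "timed out waiting for the condition", the string seen in every
	// failure above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"}) // hypothetical selector
		if err != nil {
			return false, nil // treat transient list errors as "not ready yet"
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet; keep waiting
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("Unexpected error:", err) // prints: timed out waiting for the condition
	}
}

When the pods under test never come up at all, every such wait ends in the same ErrWaitTimeout, which is consistent with four distinct specs reporting one identical error string.
------------------------------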