I0504 16:45:41.509307 21 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0504 16:45:41.509420 21 e2e.go:129] Starting e2e run "b15ba560-f1f4-4c7d-9567-9359e243857e" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1620146740 - Will randomize all specs
Will run 17 of 5484 specs

May 4 16:45:41.564: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:45:41.568: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 4 16:45:41.596: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 4 16:45:41.657: INFO: The status of Pod cmk-init-discover-node1-m8vvw is Succeeded, skipping waiting
May 4 16:45:41.657: INFO: The status of Pod cmk-init-discover-node2-zlxzj is Succeeded, skipping waiting
May 4 16:45:41.657: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 4 16:45:41.657: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
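The repeated "Waiting up to …" entries above come from a poll-until-ready loop with a deadline. A minimal sketch of that pattern, with illustrative names only (this is not the actual e2e framework API):

```python
import time

def wait_for(condition, timeout_s=300.0, interval_s=1.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns True or `timeout_s` elapses.

    Hypothetical helper mirroring the "Waiting up to Xm0s for ..." log
    lines: check, sleep, re-check, and give up at the deadline.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if condition():
            return True
        sleep(interval_s)
    return False

# Example: a condition that becomes true on the third poll.
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for(ready, timeout_s=10, interval_s=0, sleep=lambda s: None))  # prints True
```

The injectable `clock` and `sleep` parameters make the loop testable without real waiting, which is also why the e2e framework logs elapsed time per check.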
May 4 16:45:41.657: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 4 16:45:41.674: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 4 16:45:41.674: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 4 16:45:41.674: INFO: e2e test version: v1.19.10
May 4 16:45:41.674: INFO: kube-apiserver version: v1.19.8
May 4 16:45:41.674: INFO: >>> kubeConfig: /root/.kube/config
May 4 16:45:41.679: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:45:41.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
May 4 16:45:41.701: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 4 16:45:41.704: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:46:12.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8649" for this suite.
STEP: Destroying namespace "nsdeletetest-7990" for this suite.
May 4 16:46:12.784: INFO: Namespace nsdeletetest-7990 was already deleted
STEP: Destroying namespace "nsdeletetest-7790" for this suite.
• [SLOW TEST:31.108 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":1,"skipped":15,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:46:12.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 4 16:46:12.815: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 4 16:46:12.824: INFO: Waiting for terminating namespaces to be deleted...
May 4 16:46:12.827: INFO: Logging pods the apiserver thinks is on node node1 before test
May 4 16:46:12.834: INFO: liveness-http from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.834: INFO: Container liveness-http ready: false, restart count 25
May 4 16:46:12.834: INFO: cmk-init-discover-node1-m8vvw from kube-system started at 2021-05-04 14:54:32 +0000 UTC (3 container statuses recorded)
May 4 16:46:12.834: INFO: Container discover ready: false, restart count 0
May 4 16:46:12.834: INFO: Container init ready: false, restart count 0
May 4 16:46:12.834: INFO: Container install ready: false, restart count 0
May 4 16:46:12.834: INFO: cmk-slg76 from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded)
May 4 16:46:12.834: INFO: Container nodereport ready: true, restart count 0
May 4 16:46:12.834: INFO: Container reconcile ready: true, restart count 0
May 4 16:46:12.834: INFO: kube-flannel-d6pbl from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.835: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:46:12.835: INFO: kube-multus-ds-amd64-pkmbz from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.835: INFO: Container kube-multus ready: true, restart count 1
May 4 16:46:12.835: INFO: kube-proxy-t2mbn from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.835: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:46:12.835: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.835: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:46:12.835: INFO: nginx-proxy-node1 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.835: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:46:12.835: INFO: node-feature-discovery-worker-wfgl5 from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.835: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:46:12.835: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.835: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:46:12.835: INFO: collectd-4755t from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded)
May 4 16:46:12.835: INFO: Container collectd ready: true, restart count 0
May 4 16:46:12.835: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:46:12.835: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:46:12.835: INFO: node-exporter-k8qd9 from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded)
May 4 16:46:12.835: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:46:12.835: INFO: Container node-exporter ready: true, restart count 0
May 4 16:46:12.835: INFO: prometheus-k8s-0 from monitoring started at 2021-05-04 14:56:12 +0000 UTC (5 container statuses recorded)
May 4 16:46:12.835: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 4 16:46:12.835: INFO: Container grafana ready: true, restart count 0
May 4 16:46:12.835: INFO: Container prometheus ready: true, restart count 1
May 4 16:46:12.835: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 4 16:46:12.835: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 4 16:46:12.835: INFO: prometheus-operator-5bb8cb9d8f-rrrhf from monitoring started at 2021-05-04 14:56:03 +0000 UTC (2 container statuses recorded)
May 4 16:46:12.835: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:46:12.835: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:46:12.835: INFO: Logging pods the apiserver thinks is on node node2 before test
May 4 16:46:12.844: INFO: liveness-exec from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container liveness-exec ready: false, restart count 6
May 4 16:46:12.844: INFO: cmk-2fmbx from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded)
May 4 16:46:12.844: INFO: Container nodereport ready: true, restart count 0
May 4 16:46:12.844: INFO: Container reconcile ready: true, restart count 0
May 4 16:46:12.844: INFO: cmk-init-discover-node2-zlxzj from kube-system started at 2021-05-04 14:54:52 +0000 UTC (3 container statuses recorded)
May 4 16:46:12.844: INFO: Container discover ready: false, restart count 0
May 4 16:46:12.844: INFO: Container init ready: false, restart count 0
May 4 16:46:12.844: INFO: Container install ready: false, restart count 0
May 4 16:46:12.844: INFO: cmk-webhook-6c9d5f8578-fr595 from kube-system started at 2021-05-04 14:55:15 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:46:12.844: INFO: kube-flannel-lnwkk from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:46:12.844: INFO: kube-multus-ds-amd64-7r2s4 from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container kube-multus ready: true, restart count 1
May 4 16:46:12.844: INFO: kube-proxy-rfjjf from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:46:12.844: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:46:12.844: INFO: nginx-proxy-node2 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:46:12.844: INFO: node-feature-discovery-worker-jzjqs from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:46:12.844: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded)
May 4 16:46:12.844: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:46:12.844: INFO: collectd-dhwfp from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded)
May 4 16:46:12.844: INFO: Container collectd ready: true, restart count 0
May 4 16:46:12.844: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:46:12.844: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:46:12.844: INFO: node-exporter-5lghf from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded)
May 4 16:46:12.844: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:46:12.844: INFO: Container node-exporter ready: true, restart count 0
May 4 16:46:12.844: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x from monitoring started at 2021-05-04 14:59:02 +0000 UTC (2 container statuses recorded)
May 4 16:46:12.844: INFO: Container tas-controller ready: true, restart count 0
May 4 16:46:12.844: INFO: Container tas-extender ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9448be59-f20e-4b28-aa56-badaf3252688 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-9448be59-f20e-4b28-aa56-badaf3252688 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9448be59-f20e-4b28-aa56-badaf3252688
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:51:20.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7260" for this suite.
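The scheduling rule this spec exercises: pod5 (hostIP 127.0.0.1, hostPort 54322) cannot land on pod4's node because pod4 already holds hostPort 54322 on 0.0.0.0, and the wildcard address overlaps every hostIP on the same port and protocol. A minimal sketch of that overlap check, with hypothetical names (not the kube-scheduler's actual code):

```python
def host_ports_conflict(a, b):
    """Return True if two (hostIP, hostPort, protocol) requests collide.

    Illustrative model of the rule the e2e test validates: a hostIP of
    0.0.0.0 (or unset, the empty string) binds all interfaces, so it
    overlaps any other hostIP on the same port and protocol.
    """
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False  # different ports or protocols never collide
    wildcard = {"", "0.0.0.0"}
    return ip_a in wildcard or ip_b in wildcard or ip_a == ip_b

# pod4 holds 0.0.0.0:54322/TCP; pod5 asks for 127.0.0.1:54322/TCP
print(host_ports_conflict(("0.0.0.0", 54322, "TCP"),
                          ("127.0.0.1", 54322, "TCP")))  # prints True
```

Two distinct non-wildcard hostIPs on the same port (say 127.0.0.1 and 192.168.1.1) would not conflict, which is why the wildcard case gets its own conformance test.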
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:308.150 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":2,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:51:20.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:51:27.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9112" for this suite.
STEP: Destroying namespace "nsdeletetest-5356" for this suite.
May 4 16:51:27.030: INFO: Namespace nsdeletetest-5356 was already deleted
STEP: Destroying namespace "nsdeletetest-7166" for this suite.
• [SLOW TEST:6.091 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":3,"skipped":401,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:51:27.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 16:51:27.084: INFO: Create a RollingUpdate DaemonSet
May 4 16:51:27.090: INFO: Check that daemon pods launch on every node of the cluster
May 4 16:51:27.094: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:27.094: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:27.094: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:27.095: INFO: Number of nodes with available pods: 0
May 4 16:51:27.095: INFO: Node node1 is running more than one daemon pod
May 4 16:51:28.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:28.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:28.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:28.103: INFO: Number of nodes with available pods: 0
May 4 16:51:28.103: INFO: Node node1 is running more than one daemon pod
May 4 16:51:29.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:29.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:29.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:29.106: INFO: Number of nodes with available pods: 0
May 4 16:51:29.106: INFO: Node node1 is running more than one daemon pod
May 4 16:51:30.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:30.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:30.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:30.104: INFO: Number of nodes with available pods: 0
May 4 16:51:30.104: INFO: Node node1 is running more than one daemon pod
May 4 16:51:31.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:31.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:31.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:31.105: INFO: Number of nodes with available pods: 0
May 4 16:51:31.105: INFO: Node node1 is running more than one daemon pod
May 4 16:51:32.106: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:32.106: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:32.106: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:32.109: INFO: Number of nodes with available pods: 0
May 4 16:51:32.109: INFO: Node node1 is running more than one daemon pod
May 4 16:51:33.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:33.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:33.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:33.105: INFO: Number of nodes with available pods: 0
May 4 16:51:33.105: INFO: Node node1 is running more than one daemon pod
May 4 16:51:34.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:34.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:34.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:34.105: INFO: Number of nodes with available pods: 0
May 4 16:51:34.105: INFO: Node node1 is running more than one daemon pod
May 4 16:51:35.104: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:35.104: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:35.104: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:35.107: INFO: Number of nodes with available pods: 0
May 4 16:51:35.107: INFO: Node node1 is running more than one daemon pod
May 4 16:51:36.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:36.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:36.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:36.104: INFO: Number of nodes with available pods: 0
May 4 16:51:36.104: INFO: Node node1 is running more than one daemon pod
May 4 16:51:37.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:37.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:37.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:37.106: INFO: Number of nodes with available pods: 0
May 4 16:51:37.106: INFO: Node node1 is running more than one daemon pod
May 4 16:51:38.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:38.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:38.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:38.103: INFO: Number of nodes with available pods: 0
May 4 16:51:38.103: INFO: Node node1 is running more than one daemon pod
May 4 16:51:39.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:39.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:39.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:39.106: INFO: Number of nodes with available pods: 0
May 4 16:51:39.106: INFO: Node node1 is running more than one daemon pod
May 4 16:51:40.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:40.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:40.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:40.105: INFO: Number of nodes with available pods: 0
May 4 16:51:40.105: INFO: Node node1 is running more than one daemon pod
May 4 16:51:41.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:41.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:41.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:41.104: INFO: Number of nodes with available pods: 0
May 4 16:51:41.104: INFO: Node node1 is running more than one daemon pod
May 4 16:51:42.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:42.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:42.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:42.105: INFO: Number of nodes with available pods: 0
May 4 16:51:42.105: INFO: Node node1 is running more than one daemon pod
May 4 16:51:43.104: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:43.104: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:43.104: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:43.107: INFO: Number of nodes with available pods: 0
May 4 16:51:43.107: INFO: Node node1 is running more than one daemon pod
May 4 16:51:44.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:44.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:44.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:44.104: INFO: Number of nodes with available pods: 0
May 4 16:51:44.104: INFO: Node node1 is running more than one daemon pod
May 4 16:51:45.105: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:45.105: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:45.106: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:45.112: INFO: Number of nodes with available pods: 0
May 4 16:51:45.112: INFO: Node node1 is running more than one daemon pod
May 4 16:51:46.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:46.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:46.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:46.105: INFO: Number of nodes with available pods: 0
May 4 16:51:46.105: INFO: Node node1 is running more than one daemon pod
May 4 16:51:47.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:47.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:47.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:47.105: INFO: Number of nodes with available pods: 0
May 4 16:51:47.105: INFO: Node node1 is running more than one daemon pod
May 4 16:51:48.104: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:48.104: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:48.104: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:48.107: INFO: Number of nodes with available pods: 0
May 4 16:51:48.107: INFO: Node node1 is running more than one daemon pod
May 4 16:51:49.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:49.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:49.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:49.104: INFO: Number of nodes with available pods: 0
May 4 16:51:49.104: INFO: Node node1 is running more than one daemon pod
May 4 16:51:50.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:50.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:50.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:50.103: INFO: Number of nodes with available pods: 0
May 4 16:51:50.103: INFO: Node node1 is running more than one daemon pod
May 4 16:51:51.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:51.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:51.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:51:51.107: INFO: Number of nodes with available pods: 0
May 4 16:51:51.107: INFO: Node node1 is running more
than one daemon pod May 4 16:51:52.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:52.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:52.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:52.105: INFO: Number of nodes with available pods: 0 May 4 16:51:52.105: INFO: Node node1 is running more than one daemon pod May 4 16:51:53.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:53.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:53.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:53.103: INFO: Number of nodes with available pods: 0 May 4 16:51:53.103: INFO: Node node1 is running more than one daemon pod May 4 16:51:54.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:54.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:54.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:54.103: INFO: Number of nodes with available pods: 0 May 4 
16:51:54.103: INFO: Node node1 is running more than one daemon pod May 4 16:51:55.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:55.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:55.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:55.105: INFO: Number of nodes with available pods: 0 May 4 16:51:55.105: INFO: Node node1 is running more than one daemon pod May 4 16:51:56.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:56.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:56.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:56.103: INFO: Number of nodes with available pods: 0 May 4 16:51:56.103: INFO: Node node1 is running more than one daemon pod May 4 16:51:57.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:57.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:57.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:57.105: INFO: 
Number of nodes with available pods: 0 May 4 16:51:57.105: INFO: Node node1 is running more than one daemon pod May 4 16:51:58.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:58.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:58.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:58.103: INFO: Number of nodes with available pods: 0 May 4 16:51:58.103: INFO: Node node1 is running more than one daemon pod May 4 16:51:59.104: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:59.104: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:59.104: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:51:59.106: INFO: Number of nodes with available pods: 0 May 4 16:51:59.106: INFO: Node node1 is running more than one daemon pod May 4 16:52:00.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:00.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:00.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 4 16:52:00.103: INFO: Number of nodes with available pods: 0 May 4 16:52:00.103: INFO: Node node1 is running more than one daemon pod May 4 16:52:01.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:01.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:01.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:01.103: INFO: Number of nodes with available pods: 0 May 4 16:52:01.103: INFO: Node node1 is running more than one daemon pod May 4 16:52:02.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:02.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:02.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:02.105: INFO: Number of nodes with available pods: 0 May 4 16:52:02.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:03.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:03.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:03.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:03.105: INFO: Number of nodes with available pods: 0 May 4 16:52:03.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:04.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:04.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:04.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:04.103: INFO: Number of nodes with available pods: 0 May 4 16:52:04.103: INFO: Node node1 is running more than one daemon pod May 4 16:52:05.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:05.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:05.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:05.104: INFO: Number of nodes with available pods: 0 May 4 16:52:05.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:06.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:06.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:06.103: INFO: DaemonSet pods can't tolerate node master3 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:06.105: INFO: Number of nodes with available pods: 0 May 4 16:52:06.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:07.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:07.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:07.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:07.104: INFO: Number of nodes with available pods: 0 May 4 16:52:07.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:08.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:08.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:08.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:08.106: INFO: Number of nodes with available pods: 0 May 4 16:52:08.106: INFO: Node node1 is running more than one daemon pod May 4 16:52:09.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:09.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:09.102: INFO: 
DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:09.104: INFO: Number of nodes with available pods: 0 May 4 16:52:09.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:10.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:10.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:10.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:10.104: INFO: Number of nodes with available pods: 0 May 4 16:52:10.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:11.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:11.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:11.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:11.105: INFO: Number of nodes with available pods: 0 May 4 16:52:11.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:12.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:12.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 4 16:52:12.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:12.104: INFO: Number of nodes with available pods: 0 May 4 16:52:12.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:13.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:13.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:13.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:13.104: INFO: Number of nodes with available pods: 0 May 4 16:52:13.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:14.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:14.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:14.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:14.104: INFO: Number of nodes with available pods: 0 May 4 16:52:14.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:15.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:15.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:15.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:15.103: INFO: Number of nodes with available pods: 0 May 4 16:52:15.103: INFO: Node node1 is running more than one daemon pod May 4 16:52:16.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:16.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:16.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:16.104: INFO: Number of nodes with available pods: 0 May 4 16:52:16.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:17.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:17.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:17.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:17.106: INFO: Number of nodes with available pods: 0 May 4 16:52:17.106: INFO: Node node1 is running more than one daemon pod May 4 16:52:18.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:18.102: INFO: DaemonSet pods can't tolerate node master2 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:18.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:18.105: INFO: Number of nodes with available pods: 0 May 4 16:52:18.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:19.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:19.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:19.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:19.104: INFO: Number of nodes with available pods: 0 May 4 16:52:19.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:20.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:20.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:20.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:20.104: INFO: Number of nodes with available pods: 0 May 4 16:52:20.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:21.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:21.101: INFO: 
DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:21.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:21.104: INFO: Number of nodes with available pods: 0 May 4 16:52:21.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:22.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:22.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:22.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:22.105: INFO: Number of nodes with available pods: 0 May 4 16:52:22.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:23.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:23.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:23.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:23.104: INFO: Number of nodes with available pods: 0 May 4 16:52:23.104: INFO: Node node1 is running more than one daemon pod May 4 16:52:24.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 4 16:52:24.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:24.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:24.105: INFO: Number of nodes with available pods: 0 May 4 16:52:24.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:25.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:25.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:25.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:25.103: INFO: Number of nodes with available pods: 0 May 4 16:52:25.103: INFO: Node node1 is running more than one daemon pod May 4 16:52:26.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:26.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:26.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:26.106: INFO: Number of nodes with available pods: 0 May 4 16:52:26.106: INFO: Node node1 is running more than one daemon pod May 4 16:52:27.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:27.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:27.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:27.105: INFO: Number of nodes with available pods: 0 May 4 16:52:27.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:28.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:28.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:28.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:28.105: INFO: Number of nodes with available pods: 0 May 4 16:52:28.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:29.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:29.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:29.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:52:29.105: INFO: Number of nodes with available pods: 0 May 4 16:52:29.105: INFO: Node node1 is running more than one daemon pod May 4 16:52:30.101: INFO: DaemonSet pods can't tolerate node master1 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:52:30.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:52:30.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:52:30.104: INFO: Number of nodes with available pods: 0
May 4 16:52:30.104: INFO: Node node1 is running more than one daemon pod
[... identical poll output (master1/master2/master3 taint skips, "Number of nodes with available pods: 0", "Node node1 is running more than one daemon pod") repeated roughly once per second from 16:52:31 through 16:53:30, elided ...]
May 4 16:53:31.104: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:53:31.104: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:53:31.104: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:31.107: INFO: Number of nodes with available pods: 0 May 4 16:53:31.107: INFO: Node node1 is running more than one daemon pod May 4 16:53:32.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:32.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:32.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:32.103: INFO: Number of nodes with available pods: 0 May 4 16:53:32.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:33.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:33.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:33.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:33.104: INFO: Number of nodes with available pods: 0 May 4 16:53:33.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:34.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:34.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:34.103: INFO: DaemonSet pods can't tolerate node master3 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:34.106: INFO: Number of nodes with available pods: 0 May 4 16:53:34.106: INFO: Node node1 is running more than one daemon pod May 4 16:53:35.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:35.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:35.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:35.106: INFO: Number of nodes with available pods: 0 May 4 16:53:35.106: INFO: Node node1 is running more than one daemon pod May 4 16:53:36.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:36.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:36.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:36.105: INFO: Number of nodes with available pods: 0 May 4 16:53:36.105: INFO: Node node1 is running more than one daemon pod May 4 16:53:37.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:37.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:37.103: INFO: 
DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:37.106: INFO: Number of nodes with available pods: 0 May 4 16:53:37.106: INFO: Node node1 is running more than one daemon pod May 4 16:53:38.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:38.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:38.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:38.103: INFO: Number of nodes with available pods: 0 May 4 16:53:38.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:39.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:39.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:39.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:39.103: INFO: Number of nodes with available pods: 0 May 4 16:53:39.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:40.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:40.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 4 16:53:40.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:40.103: INFO: Number of nodes with available pods: 0 May 4 16:53:40.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:41.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:41.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:41.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:41.103: INFO: Number of nodes with available pods: 0 May 4 16:53:41.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:42.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:42.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:42.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:42.103: INFO: Number of nodes with available pods: 0 May 4 16:53:42.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:43.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:43.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:43.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:43.104: INFO: Number of nodes with available pods: 0 May 4 16:53:43.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:44.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:44.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:44.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:44.103: INFO: Number of nodes with available pods: 0 May 4 16:53:44.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:45.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:45.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:45.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:45.104: INFO: Number of nodes with available pods: 0 May 4 16:53:45.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:46.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:46.102: INFO: DaemonSet pods can't tolerate node master2 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:46.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:46.105: INFO: Number of nodes with available pods: 0 May 4 16:53:46.105: INFO: Node node1 is running more than one daemon pod May 4 16:53:47.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:47.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:47.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:47.104: INFO: Number of nodes with available pods: 0 May 4 16:53:47.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:48.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:48.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:48.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:48.104: INFO: Number of nodes with available pods: 0 May 4 16:53:48.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:49.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:49.101: INFO: 
DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:49.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:49.105: INFO: Number of nodes with available pods: 0 May 4 16:53:49.105: INFO: Node node1 is running more than one daemon pod May 4 16:53:50.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:50.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:50.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:50.103: INFO: Number of nodes with available pods: 0 May 4 16:53:50.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:51.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:51.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:51.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:51.103: INFO: Number of nodes with available pods: 0 May 4 16:53:51.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:52.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 4 16:53:52.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:52.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:52.104: INFO: Number of nodes with available pods: 0 May 4 16:53:52.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:53.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:53.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:53.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:53.103: INFO: Number of nodes with available pods: 0 May 4 16:53:53.103: INFO: Node node1 is running more than one daemon pod May 4 16:53:54.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:54.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:54.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:54.105: INFO: Number of nodes with available pods: 0 May 4 16:53:54.105: INFO: Node node1 is running more than one daemon pod May 4 16:53:55.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:55.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:55.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:55.104: INFO: Number of nodes with available pods: 0 May 4 16:53:55.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:56.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:56.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:56.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:56.104: INFO: Number of nodes with available pods: 0 May 4 16:53:56.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:57.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:57.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:57.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:57.104: INFO: Number of nodes with available pods: 0 May 4 16:53:57.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:58.101: INFO: DaemonSet pods can't tolerate node master1 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:58.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:58.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:58.104: INFO: Number of nodes with available pods: 0 May 4 16:53:58.104: INFO: Node node1 is running more than one daemon pod May 4 16:53:59.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:59.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:59.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:53:59.104: INFO: Number of nodes with available pods: 0 May 4 16:53:59.104: INFO: Node node1 is running more than one daemon pod May 4 16:54:00.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:00.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:00.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:00.104: INFO: Number of nodes with available pods: 0 May 4 16:54:00.104: INFO: Node node1 is running more than one daemon pod May 4 16:54:01.101: INFO: 
DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:01.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:01.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:01.104: INFO: Number of nodes with available pods: 0 May 4 16:54:01.104: INFO: Node node1 is running more than one daemon pod May 4 16:54:02.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:02.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:02.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:02.104: INFO: Number of nodes with available pods: 0 May 4 16:54:02.104: INFO: Node node1 is running more than one daemon pod May 4 16:54:03.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:03.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:03.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:03.105: INFO: Number of nodes with available pods: 0 May 4 16:54:03.105: INFO: Node node1 is running more 
than one daemon pod May 4 16:54:04.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:04.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:04.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:04.103: INFO: Number of nodes with available pods: 0 May 4 16:54:04.104: INFO: Node node1 is running more than one daemon pod May 4 16:54:05.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:05.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:05.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:05.105: INFO: Number of nodes with available pods: 0 May 4 16:54:05.105: INFO: Node node1 is running more than one daemon pod May 4 16:54:06.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:06.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:06.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:06.103: INFO: Number of nodes with available pods: 0 May 4 
16:54:06.103: INFO: Node node1 is running more than one daemon pod May 4 16:54:07.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:07.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:07.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:07.103: INFO: Number of nodes with available pods: 0 May 4 16:54:07.103: INFO: Node node1 is running more than one daemon pod May 4 16:54:08.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:08.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:08.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:08.104: INFO: Number of nodes with available pods: 0 May 4 16:54:08.104: INFO: Node node1 is running more than one daemon pod May 4 16:54:09.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:09.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:09.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:09.106: INFO: 
Number of nodes with available pods: 0 May 4 16:54:09.106: INFO: Node node1 is running more than one daemon pod May 4 16:54:10.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:10.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:10.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:10.105: INFO: Number of nodes with available pods: 0 May 4 16:54:10.105: INFO: Node node1 is running more than one daemon pod May 4 16:54:11.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:11.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:11.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:11.104: INFO: Number of nodes with available pods: 0 May 4 16:54:11.104: INFO: Node node1 is running more than one daemon pod May 4 16:54:12.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:12.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:54:12.100: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 4 16:54:12.103: INFO: Number of nodes with available pods: 0 May 4 16:54:12.103: INFO: Node node1 is running more than one daemon pod
[... the preceding check repeated once per second from 16:54:13 through 16:55:10 with identical results: DaemonSet pods can't tolerate nodes master1, master2, and master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], each skipped; number of nodes with available pods remained 0; node1 still running more than one daemon pod ...]
May 4 16:55:11.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:11.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:11.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:11.106: INFO: Number of nodes with available pods: 0 May 4 16:55:11.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:12.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:12.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:12.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:12.104: INFO: Number of nodes with available pods: 0 May 4 16:55:12.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:13.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:13.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:13.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:13.104: INFO: Number of nodes with available pods: 0 May 4 16:55:13.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:14.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:14.102: INFO: DaemonSet pods can't tolerate node master2 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:14.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:14.105: INFO: Number of nodes with available pods: 0 May 4 16:55:14.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:15.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:15.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:15.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:15.103: INFO: Number of nodes with available pods: 0 May 4 16:55:15.103: INFO: Node node1 is running more than one daemon pod May 4 16:55:16.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:16.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:16.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:16.105: INFO: Number of nodes with available pods: 0 May 4 16:55:16.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:17.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:17.103: INFO: 
DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:17.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:17.106: INFO: Number of nodes with available pods: 0 May 4 16:55:17.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:18.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:18.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:18.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:18.103: INFO: Number of nodes with available pods: 0 May 4 16:55:18.103: INFO: Node node1 is running more than one daemon pod May 4 16:55:19.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:19.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:19.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:19.103: INFO: Number of nodes with available pods: 0 May 4 16:55:19.103: INFO: Node node1 is running more than one daemon pod May 4 16:55:20.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 4 16:55:20.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:20.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:20.104: INFO: Number of nodes with available pods: 0 May 4 16:55:20.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:21.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:21.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:21.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:21.104: INFO: Number of nodes with available pods: 0 May 4 16:55:21.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:22.100: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:22.100: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:22.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:22.103: INFO: Number of nodes with available pods: 0 May 4 16:55:22.103: INFO: Node node1 is running more than one daemon pod May 4 16:55:23.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:23.104: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:23.104: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:23.106: INFO: Number of nodes with available pods: 0 May 4 16:55:23.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:24.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:24.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:24.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:24.104: INFO: Number of nodes with available pods: 0 May 4 16:55:24.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:25.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:25.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:25.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:25.106: INFO: Number of nodes with available pods: 0 May 4 16:55:25.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:26.102: INFO: DaemonSet pods can't tolerate node master1 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:26.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:26.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:26.105: INFO: Number of nodes with available pods: 0 May 4 16:55:26.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:27.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:27.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:27.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:27.106: INFO: Number of nodes with available pods: 0 May 4 16:55:27.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:28.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:28.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:28.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:28.104: INFO: Number of nodes with available pods: 0 May 4 16:55:28.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:29.103: INFO: 
DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:29.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:29.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:29.106: INFO: Number of nodes with available pods: 0 May 4 16:55:29.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:30.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:30.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:30.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:30.104: INFO: Number of nodes with available pods: 0 May 4 16:55:30.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:31.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:31.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:31.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:31.104: INFO: Number of nodes with available pods: 0 May 4 16:55:31.104: INFO: Node node1 is running more 
than one daemon pod May 4 16:55:32.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:32.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:32.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:32.104: INFO: Number of nodes with available pods: 0 May 4 16:55:32.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:33.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:33.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:33.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:33.106: INFO: Number of nodes with available pods: 0 May 4 16:55:33.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:34.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:34.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:34.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:34.104: INFO: Number of nodes with available pods: 0 May 4 
16:55:34.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:35.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:35.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:35.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:35.105: INFO: Number of nodes with available pods: 0 May 4 16:55:35.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:36.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:36.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:36.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:36.104: INFO: Number of nodes with available pods: 0 May 4 16:55:36.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:37.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:37.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:37.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:37.106: INFO: 
Number of nodes with available pods: 0 May 4 16:55:37.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:38.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:38.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:38.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:38.105: INFO: Number of nodes with available pods: 0 May 4 16:55:38.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:39.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:39.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:39.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:39.105: INFO: Number of nodes with available pods: 0 May 4 16:55:39.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:40.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:40.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:40.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 4 16:55:40.104: INFO: Number of nodes with available pods: 0 May 4 16:55:40.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:41.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:41.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:41.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:41.103: INFO: Number of nodes with available pods: 0 May 4 16:55:41.103: INFO: Node node1 is running more than one daemon pod May 4 16:55:42.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:42.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:42.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:42.106: INFO: Number of nodes with available pods: 0 May 4 16:55:42.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:43.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:43.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:43.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:43.105: INFO: Number of nodes with available pods: 0 May 4 16:55:43.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:44.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:44.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:44.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:44.105: INFO: Number of nodes with available pods: 0 May 4 16:55:44.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:45.104: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:45.104: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:45.104: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:45.106: INFO: Number of nodes with available pods: 0 May 4 16:55:45.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:46.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:46.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:46.101: INFO: DaemonSet pods can't tolerate node master3 with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:46.104: INFO: Number of nodes with available pods: 0 May 4 16:55:46.104: INFO: Node node1 is running more than one daemon pod May 4 16:55:47.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:47.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:47.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:47.106: INFO: Number of nodes with available pods: 0 May 4 16:55:47.106: INFO: Node node1 is running more than one daemon pod May 4 16:55:48.103: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:48.103: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:48.103: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:48.105: INFO: Number of nodes with available pods: 0 May 4 16:55:48.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:49.104: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:49.104: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:49.104: INFO: 
DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:49.107: INFO: Number of nodes with available pods: 0 May 4 16:55:49.107: INFO: Node node1 is running more than one daemon pod May 4 16:55:50.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:50.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:50.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:50.105: INFO: Number of nodes with available pods: 0 May 4 16:55:50.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:51.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:51.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:51.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:51.105: INFO: Number of nodes with available pods: 0 May 4 16:55:51.105: INFO: Node node1 is running more than one daemon pod May 4 16:55:52.102: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:55:52.102: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node
May 4 16:55:52.102: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:55:52.106: INFO: Number of nodes with available pods: 0
May 4 16:55:52.106: INFO: Node node1 is running more than one daemon pod
[... identical poll repeated approximately once per second from 16:55:53 through 16:56:26: DaemonSet pods can't tolerate nodes master1/master2/master3 (node-role.kubernetes.io/master NoSchedule taint), number of nodes with available pods: 0, "Node node1 is running more than one daemon pod" ...]
May 4 16:56:27.101: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:56:27.101: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:56:27.101: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master
Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:56:27.104: INFO: Number of nodes with available pods: 0
May 4 16:56:27.104: INFO: Node node1 is running more than one daemon pod
May 4 16:56:27.109: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:56:27.109: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:56:27.109: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:56:27.111: INFO: Number of nodes with available pods: 0
May 4 16:56:27.111: INFO: Node node1 is running more than one daemon pod
May 4 16:56:27.112: FAIL: error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc0002fe1f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func3.9()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:433 +0x6a7
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001949680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001949680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001949680, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2856, will wait for the garbage collector to
delete the pods
May 4 16:56:27.175: INFO: Deleting DaemonSet.extensions daemon-set took: 7.195105ms
May 4 16:56:29.976: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.800213127s
May 4 16:56:33.978: INFO: Number of nodes with available pods: 0
May 4 16:56:33.978: INFO: Number of running nodes: 0, number of available pods: 0
May 4 16:56:33.985: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2856/daemonsets","resourceVersion":"50738"},"items":null}
May 4 16:56:33.988: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2856/pods","resourceVersion":"50738"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "daemonsets-2856".
STEP: Found 18 events.
May 4 16:56:34.004: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for daemon-set-6hltg: { } Scheduled: Successfully assigned daemonsets-2856/daemon-set-6hltg to node1
May 4 16:56:34.004: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for daemon-set-wrw2t: { } Scheduled: Successfully assigned daemonsets-2856/daemon-set-wrw2t to node2
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:27 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-6hltg
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:27 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-wrw2t
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:28 +0000 UTC - event for daemon-set-6hltg: {kubelet node1} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:28 +0000 UTC - event for daemon-set-6hltg: {multus } AddedInterface: Add eth0 [10.244.4.225/24]
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:29 +0000 UTC - event for daemon-set-6hltg: {kubelet node1} Failed: Error: ErrImagePull
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:29 +0000 UTC - event for daemon-set-6hltg: {kubelet node1} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:29 +0000 UTC - event for daemon-set-wrw2t: {multus } AddedInterface: Add eth0 [10.244.3.25/24]
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:29 +0000 UTC - event for daemon-set-wrw2t: {kubelet node2} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:30 +0000 UTC - event for daemon-set-6hltg: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:30 +0000 UTC - event for daemon-set-6hltg: {kubelet node1} Failed: Error: ImagePullBackOff
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:30 +0000 UTC - event for daemon-set-wrw2t: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:30 +0000 UTC - event for daemon-set-wrw2t: {kubelet node2} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:30 +0000 UTC - event for daemon-set-wrw2t: {kubelet node2} Failed: Error: ErrImagePull
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:32 +0000 UTC - event for daemon-set-wrw2t: {multus } AddedInterface: Add eth0 [10.244.3.26/24]
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:32 +0000 UTC - event for daemon-set-wrw2t: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
May 4 16:56:34.005: INFO: At 2021-05-04 16:51:32 +0000 UTC - event for daemon-set-wrw2t: {kubelet node2} Failed: Error: ImagePullBackOff
May 4 16:56:34.010: INFO: POD NODE PHASE GRACE CONDITIONS
May 4 16:56:34.010: INFO: 
May 4 16:56:34.015: INFO: Logging node info for node master1
May 4 16:56:34.017: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 db982204-549e-4532-90a7-a4410878cfc9 50705 0 2021-05-04 14:43:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"3e:f0:43:cb:66:52"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:01 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-04 14:51:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:46 +0000 UTC,LastTransitionTime:2021-05-04 14:47:46 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:27 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:27 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:27 +0000 UTC,LastTransitionTime:2021-05-04 14:43:01 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:56:27 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:88a0771919594d4187f6704fc7592bf8,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8e0a253b-2aa4-4467-879e-567e7ba1ffa4,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f 
kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:56:34.018: INFO: Logging kubelet events for node master1 May 4 16:56:34.022: INFO: Logging pods the kubelet thinks is on node master1 May 4 16:56:34.040: INFO: kube-flannel-qspzk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:56:34.040: INFO: Init container 
install-cni ready: true, restart count 0 May 4 16:56:34.040: INFO: Container kube-flannel ready: true, restart count 3 May 4 16:56:34.040: INFO: kube-multus-ds-amd64-jflvf started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.040: INFO: Container kube-multus ready: true, restart count 1 May 4 16:56:34.040: INFO: coredns-7677f9bb54-qvcd2 started at 2021-05-04 14:46:11 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.040: INFO: Container coredns ready: true, restart count 1 May 4 16:56:34.040: INFO: node-feature-discovery-controller-5bf5c49849-72rn6 started at 2021-05-04 14:51:52 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.040: INFO: Container nfd-controller ready: true, restart count 0 May 4 16:56:34.040: INFO: kube-apiserver-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.040: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:56:34.040: INFO: kube-controller-manager-master1 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.040: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:56:34.040: INFO: kube-proxy-8j6ch started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.040: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:56:34.040: INFO: docker-registry-docker-registry-56cbc7bc58-zhf8t started at 2021-05-04 14:48:42 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.040: INFO: Container docker-registry ready: true, restart count 0 May 4 16:56:34.040: INFO: Container nginx ready: true, restart count 0 May 4 16:56:34.040: INFO: node-exporter-jckjs started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.040: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:56:34.040: INFO: Container node-exporter ready: true, restart count 0 May 4 16:56:34.040: INFO: 
kube-scheduler-master1 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.040: INFO: Container kube-scheduler ready: true, restart count 0 W0504 16:56:34.051665 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:56:34.077: INFO: Latency metrics for node master1 May 4 16:56:34.077: INFO: Logging node info for node master2 May 4 16:56:34.079: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e2c15170-247b-4e7b-b818-abc807948bf8 50703 0 2021-05-04 14:43:40 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:e0:10:a0:e0:62"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:32 +0000 UTC,LastTransitionTime:2021-05-04 14:47:32 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 
16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:40 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6af568f56589422a9bd68e0270ce0f8c,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:bf27bb77-fad2-4b52-85c3-acb5113fc512,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:56:34.080: INFO: Logging kubelet events for node master2 May 4 16:56:34.082: INFO: Logging pods the kubelet thinks is on node master2 May 4 16:56:34.096: INFO: kube-proxy-6b5t8 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.096: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:56:34.096: INFO: kube-flannel-cxdfr started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:56:34.096: INFO: Init container install-cni ready: true, restart count 0 May 4 16:56:34.096: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:56:34.096: INFO: kube-multus-ds-amd64-dw8tg started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.096: INFO: Container kube-multus ready: true, restart count 1 May 4 16:56:34.096: INFO: dns-autoscaler-5b7b5c9b6f-zbrsq started at 2021-05-04 14:46:08 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.096: INFO: Container autoscaler ready: true, restart count 1 May 4 16:56:34.096: INFO: node-exporter-9c6qf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.096: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:56:34.096: INFO: Container node-exporter ready: true, restart count 0 May 4 16:56:34.096: INFO: kube-apiserver-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.096: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:56:34.096: INFO: 
kube-controller-manager-master2 started at 2021-05-04 14:47:26 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.096: INFO: Container kube-controller-manager ready: true, restart count 2 May 4 16:56:34.096: INFO: kube-scheduler-master2 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.096: INFO: Container kube-scheduler ready: true, restart count 2 W0504 16:56:34.110357 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:56:34.133: INFO: Latency metrics for node master2 May 4 16:56:34.133: INFO: Logging node info for node master3 May 4 16:56:34.136: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 b533a646-667e-403c-944b-71dec9cc4851 50701 0 2021-05-04 14:43:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"16:b0:53:14:f6:c9"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-04 14:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-04 14:43:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-04 14:45:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:48:50 +0000 UTC,LastTransitionTime:2021-05-04 14:48:50 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:26 +0000 
UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:43:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:45:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:959373dcda56494486f0c2bb0bb496cc,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:0714ca81-c21e-40d6-a288-48d597238e54,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee 
quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:56:34.136: INFO: Logging kubelet events for node master3 May 4 16:56:34.139: INFO: Logging pods the kubelet thinks is on node master3 May 4 16:56:34.155: INFO: kube-scheduler-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.155: INFO: Container kube-scheduler ready: true, restart count 2 May 4 16:56:34.155: INFO: kube-proxy-2p5b6 started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.155: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:56:34.155: INFO: kube-flannel-wznt8 started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:56:34.155: INFO: Init container install-cni ready: true, restart count 0 May 4 16:56:34.155: INFO: Container kube-flannel ready: true, restart count 1 May 4 16:56:34.155: INFO: kube-multus-ds-amd64-cgwz2 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.155: INFO: Container kube-multus ready: true, restart count 1 May 4 16:56:34.155: INFO: coredns-7677f9bb54-pshfb started at 2021-05-04 14:46:06 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.155: INFO: Container coredns ready: true, restart count 1 May 4 16:56:34.155: INFO: node-exporter-wvppn started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.155: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:56:34.155: INFO: Container node-exporter ready: true, restart count 0 May 4 16:56:34.155: INFO: kube-apiserver-master3 started 
at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.155: INFO: Container kube-apiserver ready: true, restart count 0 May 4 16:56:34.155: INFO: kube-controller-manager-master3 started at 2021-05-04 14:44:16 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.155: INFO: Container kube-controller-manager ready: true, restart count 2 W0504 16:56:34.167506 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:56:34.196: INFO: Latency metrics for node master3 May 4 16:56:34.196: INFO: Logging node info for node node1 May 4 16:56:34.199: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 b8cf7e16-d5c7-4e2c-996a-93d93bd4fa1c 50696 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true 
feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"e2:50:df:03:d2:13"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:54:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:56:26 +0000 UTC,LastTransitionTime:2021-05-04 14:47:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bffc023a4ab84df0b0181bc7b8f509e2,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:dc08af72-abca-4f1d-bd0f-0e8d8eb97de5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[@ :],SizeBytes:1002569035,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:371dc6bf7e0c7ce112a29341b000c40d840aef1dbb4fdcb3ae5c0597e28f3061 golang:alpine3.12],SizeBytes:301097267,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 
gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 
busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:56:34.199: INFO: Logging kubelet events for node node1 May 4 16:56:34.201: INFO: Logging pods the kubelet thinks is on node node1 May 4 16:56:34.221: INFO: kube-multus-ds-amd64-pkmbz started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.221: INFO: Container kube-multus ready: true, restart count 1 May 4 16:56:34.221: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.221: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:56:34.221: INFO: cmk-slg76 started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.221: INFO: Container nodereport ready: true, restart count 0 May 4 16:56:34.221: INFO: Container reconcile ready: true, restart count 0 May 4 16:56:34.221: INFO: prometheus-k8s-0 started at 2021-05-04 14:56:12 +0000 UTC (0+5 container statuses recorded) May 4 16:56:34.221: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:56:34.221: INFO: Container grafana ready: true, restart count 0 May 4 16:56:34.221: INFO: Container prometheus ready: true, restart count 1 May 4 16:56:34.221: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:56:34.221: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:56:34.221: INFO: kube-flannel-d6pbl started at 2021-05-04 14:45:37 
+0000 UTC (1+1 container statuses recorded) May 4 16:56:34.221: INFO: Init container install-cni ready: true, restart count 2 May 4 16:56:34.221: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:56:34.221: INFO: node-feature-discovery-worker-wfgl5 started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.221: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:56:34.221: INFO: prometheus-operator-5bb8cb9d8f-rrrhf started at 2021-05-04 14:56:03 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.221: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:56:34.221: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:56:34.222: INFO: node-exporter-k8qd9 started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.222: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:56:34.222: INFO: Container node-exporter ready: true, restart count 0 May 4 16:56:34.222: INFO: collectd-4755t started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:56:34.222: INFO: Container collectd ready: true, restart count 0 May 4 16:56:34.222: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:56:34.222: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:56:34.222: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.222: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:56:34.222: INFO: nginx-proxy-node1 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.222: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:56:34.222: INFO: kube-proxy-t2mbn started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.222: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:56:34.222: INFO: 
liveness-http started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.222: INFO: Container liveness-http ready: false, restart count 29 May 4 16:56:34.222: INFO: cmk-init-discover-node1-m8vvw started at 2021-05-04 14:54:32 +0000 UTC (0+3 container statuses recorded) May 4 16:56:34.222: INFO: Container discover ready: false, restart count 0 May 4 16:56:34.222: INFO: Container init ready: false, restart count 0 May 4 16:56:34.222: INFO: Container install ready: false, restart count 0 W0504 16:56:34.233135 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:56:34.278: INFO: Latency metrics for node node1 May 4 16:56:34.278: INFO: Logging node info for node node2 May 4 16:56:34.281: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 41567fa7-bb24-4381-9387-e4115195037d 50729 0 2021-05-04 14:44:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true 
feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"32:98:f5:3b:98:5c"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-04 14:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-05-04 14:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-04 14:45:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-04 14:52:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotation
aldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-04 14:54:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-05-04 14:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion
":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-04 14:47:29 +0000 UTC,LastTransitionTime:2021-05-04 14:47:29 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:31 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:31 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-04 16:56:31 +0000 UTC,LastTransitionTime:2021-05-04 14:44:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-04 16:56:31 +0000 UTC,LastTransitionTime:2021-05-04 14:45:42 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d47a1c7ab17f44f2ae7ff788700a8d74,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:df3523a1-a74f-4f8b-beb1-29f5ed8699f3,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:511d36b62d758304a5adb26b5996ed37211ab226beb7de4e67994cbecb0279a7 localhost:30500/barometer-collectd:stable],SizeBytes:1464048999,},ContainerImage{Names:[localhost:30500/cmk@sha256:f417461c5e0283b5f2ba8e34dc073a15fe1f9ff6b542330c536c86aa72f7141f localhost:30500/cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726615179,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[nginx@sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412 
nginx:1.19],SizeBytes:133117205,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:25502c57570a2143842478595be9c2a2a3cba2df60b673aef79d6ca80e3eac06 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44395488,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:c2fabacbc4e42f3db70f9508e00158b1dce4cf96d91cabaa2eca24e5a0900b66 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:e074a505e2d62b5119460ab724b2e1df10c8419ef2457f9ce9f3a0f75be3e959 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 4 16:56:34.282: INFO: Logging kubelet events for node node2 May 4 16:56:34.284: INFO: Logging pods the kubelet thinks is on node node2 May 4 16:56:34.306: INFO: kube-multus-ds-amd64-7r2s4 started at 2021-05-04 14:45:46 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.306: INFO: Container kube-multus ready: true, restart count 1 May 4 16:56:34.306: INFO: cmk-init-discover-node2-zlxzj started at 2021-05-04 14:54:52 +0000 UTC (0+3 container statuses recorded) May 4 16:56:34.306: INFO: Container discover ready: false, restart count 0 May 4 16:56:34.306: INFO: Container init ready: false, restart count 0 May 4 16:56:34.306: INFO: Container install ready: false, restart count 0 May 4 16:56:34.306: INFO: collectd-dhwfp started at 2021-05-04 15:01:51 +0000 UTC (0+3 container statuses recorded) May 4 16:56:34.306: INFO: Container collectd ready: true, restart count 0 May 4 16:56:34.306: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:56:34.306: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:56:34.306: INFO: 
nginx-proxy-node2 started at 2021-05-04 14:51:11 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.306: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:56:34.306: INFO: cmk-2fmbx started at 2021-05-04 14:55:14 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.306: INFO: Container nodereport ready: true, restart count 0 May 4 16:56:34.306: INFO: Container reconcile ready: true, restart count 0 May 4 16:56:34.306: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb started at 2021-05-04 14:46:10 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.306: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:56:34.306: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 started at 2021-05-04 14:52:50 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.306: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:56:34.306: INFO: kube-flannel-lnwkk started at 2021-05-04 14:45:37 +0000 UTC (1+1 container statuses recorded) May 4 16:56:34.306: INFO: Init container install-cni ready: true, restart count 2 May 4 16:56:34.306: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:56:34.306: INFO: cmk-webhook-6c9d5f8578-fr595 started at 2021-05-04 14:55:15 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.306: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:56:34.306: INFO: node-exporter-5lghf started at 2021-05-04 14:56:10 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.306: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:56:34.306: INFO: Container node-exporter ready: true, restart count 0 May 4 16:56:34.306: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x started at 2021-05-04 14:59:02 +0000 UTC (0+2 container statuses recorded) May 4 16:56:34.306: INFO: Container tas-controller ready: true, restart count 0 May 4 16:56:34.306: INFO: Container tas-extender ready: true, restart count 0 May 4 16:56:34.306: INFO: 
liveness-exec started at 2021-05-04 15:33:56 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.306: INFO: Container liveness-exec ready: false, restart count 6 May 4 16:56:34.306: INFO: kube-proxy-rfjjf started at 2021-05-04 14:45:01 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.306: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:56:34.306: INFO: node-feature-discovery-worker-jzjqs started at 2021-05-04 14:51:40 +0000 UTC (0+1 container statuses recorded) May 4 16:56:34.306: INFO: Container nfd-worker ready: true, restart count 0 W0504 16:56:34.319304 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 4 16:56:34.350: INFO: Latency metrics for node node2 May 4 16:56:34.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2856" for this suite. • Failure [307.317 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:56:27.112: error waiting for daemon pod to start Unexpected error: <*errors.errorString | 0xc0002fe1f0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:433 ------------------------------ {"msg":"FAILED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":3,"skipped":853,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:56:34.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:56:34.394: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 4 16:56:34.407: INFO: Number of nodes with available pods: 0 May 4 16:56:34.407: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 4 16:56:34.428: INFO: Number of nodes with available pods: 0 May 4 16:56:34.428: INFO: Node node1 is running more than one daemon pod May 4 16:56:35.431: INFO: Number of nodes with available pods: 0 May 4 16:56:35.431: INFO: Node node1 is running more than one daemon pod May 4 16:56:36.432: INFO: Number of nodes with available pods: 0 May 4 16:56:36.432: INFO: Node node1 is running more than one daemon pod May 4 16:56:37.432: INFO: Number of nodes with available pods: 0 May 4 16:56:37.432: INFO: Node node1 is running more than one daemon pod May 4 16:56:38.432: INFO: Number of nodes with available pods: 1 May 4 16:56:38.432: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 4 16:56:38.448: INFO: Number of nodes with available pods: 1 May 4 16:56:38.448: INFO: Number of running nodes: 0, number of available pods: 1 May 4 16:56:39.451: INFO: Number of nodes with available pods: 0 May 4 16:56:39.451: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 4 16:56:39.461: INFO: Number of nodes with available pods: 0 May 4 16:56:39.461: INFO: Node node1 is running more than one daemon pod May 4 16:56:40.465: INFO: Number of nodes with available pods: 0 May 4 16:56:40.466: INFO: Node node1 is running more than one daemon pod May 4 16:56:41.465: INFO: Number of nodes with available pods: 0 May 4 16:56:41.465: INFO: Node node1 is running more than one daemon pod May 4 16:56:42.467: INFO: Number of nodes with available pods: 0 May 4 16:56:42.467: INFO: Node node1 is running more than one daemon pod May 4 16:56:43.468: INFO: Number of nodes with available pods: 0 May 4 16:56:43.468: INFO: Node node1 is running more than one daemon pod May 4 16:56:44.466: INFO: Number of nodes with available pods: 0 May 4 16:56:44.466: INFO: Node node1 is running more than one daemon pod May 4 
16:56:45.465: INFO: Number of nodes with available pods: 1 May 4 16:56:45.465: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2102, will wait for the garbage collector to delete the pods May 4 16:56:45.528: INFO: Deleting DaemonSet.extensions daemon-set took: 6.238797ms May 4 16:56:45.628: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.367011ms May 4 16:56:59.931: INFO: Number of nodes with available pods: 0 May 4 16:56:59.931: INFO: Number of running nodes: 0, number of available pods: 0 May 4 16:56:59.934: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2102/daemonsets","resourceVersion":"50905"},"items":null} May 4 16:56:59.937: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2102/pods","resourceVersion":"50905"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:56:59.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2102" for this suite. 
• [SLOW TEST:25.600 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":4,"skipped":1167,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:56:59.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 4 16:56:59.987: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 4 16:56:59.995: INFO: Waiting for terminating namespaces to be deleted... 
May 4 16:56:59.997: INFO: Logging pods the apiserver thinks is on node node1 before test May 4 16:57:00.016: INFO: liveness-http from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded) May 4 16:57:00.016: INFO: Container liveness-http ready: false, restart count 29 May 4 16:57:00.016: INFO: cmk-init-discover-node1-m8vvw from kube-system started at 2021-05-04 14:54:32 +0000 UTC (3 container statuses recorded) May 4 16:57:00.016: INFO: Container discover ready: false, restart count 0 May 4 16:57:00.016: INFO: Container init ready: false, restart count 0 May 4 16:57:00.016: INFO: Container install ready: false, restart count 0 May 4 16:57:00.016: INFO: cmk-slg76 from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded) May 4 16:57:00.016: INFO: Container nodereport ready: true, restart count 0 May 4 16:57:00.016: INFO: Container reconcile ready: true, restart count 0 May 4 16:57:00.016: INFO: kube-flannel-d6pbl from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded) May 4 16:57:00.016: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:57:00.016: INFO: kube-multus-ds-amd64-pkmbz from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded) May 4 16:57:00.016: INFO: Container kube-multus ready: true, restart count 1 May 4 16:57:00.016: INFO: kube-proxy-t2mbn from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded) May 4 16:57:00.016: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:57:00.016: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded) May 4 16:57:00.016: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:57:00.016: INFO: nginx-proxy-node1 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded) May 4 16:57:00.016: INFO: 
Container nginx-proxy ready: true, restart count 2 May 4 16:57:00.016: INFO: node-feature-discovery-worker-wfgl5 from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded) May 4 16:57:00.016: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:57:00.016: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded) May 4 16:57:00.016: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:57:00.016: INFO: collectd-4755t from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded) May 4 16:57:00.016: INFO: Container collectd ready: true, restart count 0 May 4 16:57:00.016: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:57:00.016: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:57:00.016: INFO: node-exporter-k8qd9 from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded) May 4 16:57:00.016: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:57:00.016: INFO: Container node-exporter ready: true, restart count 0 May 4 16:57:00.016: INFO: prometheus-k8s-0 from monitoring started at 2021-05-04 14:56:12 +0000 UTC (5 container statuses recorded) May 4 16:57:00.016: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:57:00.016: INFO: Container grafana ready: true, restart count 0 May 4 16:57:00.016: INFO: Container prometheus ready: true, restart count 1 May 4 16:57:00.016: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:57:00.016: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:57:00.016: INFO: prometheus-operator-5bb8cb9d8f-rrrhf from monitoring started at 2021-05-04 14:56:03 +0000 UTC (2 container statuses recorded) May 4 16:57:00.016: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:57:00.016: INFO: 
Container prometheus-operator ready: true, restart count 0 May 4 16:57:00.016: INFO: Logging pods the apiserver thinks is on node node2 before test May 4 16:57:00.030: INFO: liveness-exec from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded) May 4 16:57:00.030: INFO: Container liveness-exec ready: true, restart count 7 May 4 16:57:00.030: INFO: cmk-2fmbx from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded) May 4 16:57:00.030: INFO: Container nodereport ready: true, restart count 0 May 4 16:57:00.030: INFO: Container reconcile ready: true, restart count 0 May 4 16:57:00.030: INFO: cmk-init-discover-node2-zlxzj from kube-system started at 2021-05-04 14:54:52 +0000 UTC (3 container statuses recorded) May 4 16:57:00.030: INFO: Container discover ready: false, restart count 0 May 4 16:57:00.030: INFO: Container init ready: false, restart count 0 May 4 16:57:00.030: INFO: Container install ready: false, restart count 0 May 4 16:57:00.030: INFO: cmk-webhook-6c9d5f8578-fr595 from kube-system started at 2021-05-04 14:55:15 +0000 UTC (1 container statuses recorded) May 4 16:57:00.030: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:57:00.030: INFO: kube-flannel-lnwkk from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded) May 4 16:57:00.030: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:57:00.030: INFO: kube-multus-ds-amd64-7r2s4 from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded) May 4 16:57:00.030: INFO: Container kube-multus ready: true, restart count 1 May 4 16:57:00.030: INFO: kube-proxy-rfjjf from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded) May 4 16:57:00.030: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:57:00.030: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 
container statuses recorded) May 4 16:57:00.030: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:57:00.030: INFO: nginx-proxy-node2 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded) May 4 16:57:00.030: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:57:00.030: INFO: node-feature-discovery-worker-jzjqs from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded) May 4 16:57:00.030: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:57:00.030: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded) May 4 16:57:00.030: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:57:00.030: INFO: collectd-dhwfp from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded) May 4 16:57:00.030: INFO: Container collectd ready: true, restart count 0 May 4 16:57:00.030: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:57:00.030: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:57:00.030: INFO: node-exporter-5lghf from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded) May 4 16:57:00.031: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:57:00.031: INFO: Container node-exporter ready: true, restart count 0 May 4 16:57:00.031: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x from monitoring started at 2021-05-04 14:59:02 +0000 UTC (2 container statuses recorded) May 4 16:57:00.031: INFO: Container tas-controller ready: true, restart count 0 May 4 16:57:00.031: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without 
a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6bfb7a9b-d55d-46c8-ac41-00a203a932b5 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-6bfb7a9b-d55d-46c8-ac41-00a203a932b5 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-6bfb7a9b-d55d-46c8-ac41-00a203a932b5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:57:08.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9056" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.148 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":5,"skipped":1364,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:57:08.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:57:08.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3272" for this suite. STEP: Destroying namespace "nspatchtest-5b118a99-22fb-4666-a743-9e479485146d-9038" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":6,"skipped":1448,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:57:08.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 4 16:57:08.217: INFO: Creating simple daemon set 
daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 4 16:57:08.227: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:08.227: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:08.227: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:08.230: INFO: Number of nodes with available pods: 0 May 4 16:57:08.230: INFO: Node node1 is running more than one daemon pod May 4 16:57:09.234: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:09.234: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:09.234: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:09.236: INFO: Number of nodes with available pods: 0 May 4 16:57:09.236: INFO: Node node1 is running more than one daemon pod May 4 16:57:10.236: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:10.236: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:10.236: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 
16:57:10.238: INFO: Number of nodes with available pods: 0 May 4 16:57:10.238: INFO: Node node1 is running more than one daemon pod May 4 16:57:11.236: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:11.236: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:11.236: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:11.239: INFO: Number of nodes with available pods: 0 May 4 16:57:11.239: INFO: Node node1 is running more than one daemon pod May 4 16:57:12.236: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:12.236: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:12.236: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:12.238: INFO: Number of nodes with available pods: 1 May 4 16:57:12.238: INFO: Node node1 is running more than one daemon pod May 4 16:57:13.235: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:13.235: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:13.235: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 4 16:57:13.237: INFO: Number of nodes with available pods: 2 May 4 16:57:13.237: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 4 16:57:13.265: INFO: Wrong image for pod: daemon-set-2kmp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:13.265: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:13.274: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:13.274: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:13.274: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:14.278: INFO: Wrong image for pod: daemon-set-2kmp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:14.278: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
May 4 16:57:14.282: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:14.282: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:14.282: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:15.279: INFO: Wrong image for pod: daemon-set-2kmp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:15.279: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:15.284: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:15.284: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:15.284: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:16.279: INFO: Wrong image for pod: daemon-set-2kmp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:16.279: INFO: Pod daemon-set-2kmp6 is not available May 4 16:57:16.279: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
May 4 16:57:16.283: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:16.283: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:16.283: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:17.277: INFO: Wrong image for pod: daemon-set-2kmp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:17.277: INFO: Pod daemon-set-2kmp6 is not available May 4 16:57:17.277: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:17.281: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:17.281: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:17.281: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:18.278: INFO: Wrong image for pod: daemon-set-2kmp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:18.278: INFO: Pod daemon-set-2kmp6 is not available May 4 16:57:18.278: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
May 4 16:57:18.282: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:18.282: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:18.283: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:19.280: INFO: Wrong image for pod: daemon-set-2kmp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:19.280: INFO: Pod daemon-set-2kmp6 is not available May 4 16:57:19.280: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:19.284: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:19.284: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:19.284: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:20.279: INFO: Pod daemon-set-gn5s9 is not available May 4 16:57:20.279: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
May 4 16:57:20.284: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:20.284: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:20.284: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:21.278: INFO: Pod daemon-set-gn5s9 is not available May 4 16:57:21.278: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:21.282: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:21.282: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:21.282: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:22.278: INFO: Pod daemon-set-gn5s9 is not available May 4 16:57:22.278: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
May 4 16:57:22.281: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:22.281: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:22.281: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:23.278: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. May 4 16:57:23.282: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:23.282: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:23.282: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:24.278: INFO: Wrong image for pod: daemon-set-hg7q2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
May 4 16:57:24.278: INFO: Pod daemon-set-hg7q2 is not available May 4 16:57:24.281: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:24.281: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:24.282: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:25.278: INFO: Pod daemon-set-vsdgl is not available May 4 16:57:25.281: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:25.282: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:25.282: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 4 16:57:25.286: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:25.286: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:25.286: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:25.289: INFO: Number of nodes with available pods: 1 May 4 16:57:25.289: INFO: Node node2 is running more than one daemon pod May 4 16:57:26.296: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:26.296: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:26.296: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:26.299: INFO: Number of nodes with available pods: 1 May 4 16:57:26.299: INFO: Node node2 is running more than one daemon pod May 4 16:57:27.296: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:27.296: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:27.296: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:27.298: INFO: Number of nodes with available pods: 1 May 4 16:57:27.298: INFO: 
Node node2 is running more than one daemon pod
May 4 16:57:28.296: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:57:28.296: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:57:28.296: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:57:28.299: INFO: Number of nodes with available pods: 2
May 4 16:57:28.299: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4931, will wait for the garbage collector to delete the pods
May 4 16:57:28.371: INFO: Deleting DaemonSet.extensions daemon-set took: 4.813323ms
May 4 16:57:29.071: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.503528ms
May 4 16:57:39.974: INFO: Number of nodes with available pods: 0
May 4 16:57:39.974: INFO: Number of running nodes: 0, number of available pods: 0
May 4 16:57:39.976: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4931/daemonsets","resourceVersion":"51239"},"items":null}
May 4 16:57:39.979: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4931/pods","resourceVersion":"51239"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:57:39.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4931" for this suite.
• [SLOW TEST:31.812 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":7,"skipped":2118,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:57:39.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 4 16:57:40.019: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 4 16:57:40.026: INFO: Waiting for terminating namespaces to be deleted...
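The wait loop in the DaemonSet test above repeatedly skips the tainted master nodes and counts nodes with an available daemon pod until every schedulable node runs exactly one. A minimal, illustrative model of that check (not the actual Go framework code; all names and data structures are made up for the sketch):

```python
# Simplified model of the e2e readiness check: nodes whose taints the
# DaemonSet does not tolerate are skipped (the "can't tolerate node ...
# skip checking this node" lines), then nodes with exactly one available
# daemon pod are counted.

def nodes_ready(nodes, tolerations, pods_by_node):
    """Return (checked_nodes, count_of_nodes_with_an_available_pod)."""
    checked, available = [], 0
    for node, taints in nodes.items():
        if any(t not in tolerations for t in taints):
            continue  # mirrors: skip checking this node
        checked.append(node)
        pods = pods_by_node.get(node, [])
        if len(pods) == 1 and pods[0]["available"]:
            available += 1
    return checked, available

# Cluster shape from the log: three tainted masters, two worker nodes.
nodes = {
    "master1": ["node-role.kubernetes.io/master:NoSchedule"],
    "master2": ["node-role.kubernetes.io/master:NoSchedule"],
    "master3": ["node-role.kubernetes.io/master:NoSchedule"],
    "node1": [],
    "node2": [],
}
pods = {"node1": [{"available": True}], "node2": [{"available": True}]}
checked, avail = nodes_ready(nodes, set(), pods)
# checked -> ["node1", "node2"]; avail -> 2, matching
# "Number of running nodes: 2, number of available pods: 2"
```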
May 4 16:57:40.029: INFO: Logging pods the apiserver thinks is on node node1 before test
May 4 16:57:40.038: INFO: liveness-http from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.038: INFO: Container liveness-http ready: false, restart count 29
May 4 16:57:40.038: INFO: cmk-init-discover-node1-m8vvw from kube-system started at 2021-05-04 14:54:32 +0000 UTC (3 container statuses recorded)
May 4 16:57:40.038: INFO: Container discover ready: false, restart count 0
May 4 16:57:40.038: INFO: Container init ready: false, restart count 0
May 4 16:57:40.038: INFO: Container install ready: false, restart count 0
May 4 16:57:40.038: INFO: cmk-slg76 from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded)
May 4 16:57:40.038: INFO: Container nodereport ready: true, restart count 0
May 4 16:57:40.038: INFO: Container reconcile ready: true, restart count 0
May 4 16:57:40.038: INFO: kube-flannel-d6pbl from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.038: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:57:40.038: INFO: kube-multus-ds-amd64-pkmbz from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.038: INFO: Container kube-multus ready: true, restart count 1
May 4 16:57:40.038: INFO: kube-proxy-t2mbn from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.038: INFO: Container kube-proxy ready: true, restart count 1
May 4 16:57:40.038: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.038: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 16:57:40.038: INFO: nginx-proxy-node1 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.038: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:57:40.038: INFO: node-feature-discovery-worker-wfgl5 from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.038: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:57:40.038: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.038: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:57:40.038: INFO: collectd-4755t from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded)
May 4 16:57:40.038: INFO: Container collectd ready: true, restart count 0
May 4 16:57:40.039: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:57:40.039: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:57:40.039: INFO: node-exporter-k8qd9 from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded)
May 4 16:57:40.039: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:57:40.039: INFO: Container node-exporter ready: true, restart count 0
May 4 16:57:40.039: INFO: prometheus-k8s-0 from monitoring started at 2021-05-04 14:56:12 +0000 UTC (5 container statuses recorded)
May 4 16:57:40.039: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 4 16:57:40.039: INFO: Container grafana ready: true, restart count 0
May 4 16:57:40.039: INFO: Container prometheus ready: true, restart count 1
May 4 16:57:40.039: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 4 16:57:40.039: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 4 16:57:40.039: INFO: prometheus-operator-5bb8cb9d8f-rrrhf from monitoring started at 2021-05-04 14:56:03 +0000 UTC (2 container statuses recorded)
May 4 16:57:40.039: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:57:40.039: INFO: Container prometheus-operator ready: true, restart count 0
May 4 16:57:40.039: INFO: Logging pods the apiserver thinks is on node node2 before test
May 4 16:57:40.045: INFO: liveness-exec from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.045: INFO: Container liveness-exec ready: true, restart count 7
May 4 16:57:40.045: INFO: cmk-2fmbx from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded)
May 4 16:57:40.045: INFO: Container nodereport ready: true, restart count 0
May 4 16:57:40.045: INFO: Container reconcile ready: true, restart count 0
May 4 16:57:40.045: INFO: cmk-init-discover-node2-zlxzj from kube-system started at 2021-05-04 14:54:52 +0000 UTC (3 container statuses recorded)
May 4 16:57:40.045: INFO: Container discover ready: false, restart count 0
May 4 16:57:40.045: INFO: Container init ready: false, restart count 0
May 4 16:57:40.045: INFO: Container install ready: false, restart count 0
May 4 16:57:40.045: INFO: cmk-webhook-6c9d5f8578-fr595 from kube-system started at 2021-05-04 14:55:15 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.045: INFO: Container cmk-webhook ready: true, restart count 0
May 4 16:57:40.045: INFO: kube-flannel-lnwkk from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.045: INFO: Container kube-flannel ready: true, restart count 2
May 4 16:57:40.045: INFO: kube-multus-ds-amd64-7r2s4 from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.045: INFO: Container kube-multus ready: true, restart count 1
May 4 16:57:40.045: INFO: kube-proxy-rfjjf from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.046: INFO: Container kube-proxy ready: true, restart count 2
May 4 16:57:40.046: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.046: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 16:57:40.046: INFO: nginx-proxy-node2 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.046: INFO: Container nginx-proxy ready: true, restart count 2
May 4 16:57:40.046: INFO: node-feature-discovery-worker-jzjqs from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.046: INFO: Container nfd-worker ready: true, restart count 0
May 4 16:57:40.046: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded)
May 4 16:57:40.046: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 16:57:40.046: INFO: collectd-dhwfp from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded)
May 4 16:57:40.046: INFO: Container collectd ready: true, restart count 0
May 4 16:57:40.046: INFO: Container collectd-exporter ready: true, restart count 0
May 4 16:57:40.046: INFO: Container rbac-proxy ready: true, restart count 0
May 4 16:57:40.046: INFO: node-exporter-5lghf from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded)
May 4 16:57:40.046: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 16:57:40.046: INFO: Container node-exporter ready: true, restart count 0
May 4 16:57:40.046: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x from monitoring started at 2021-05-04 14:59:02 +0000 UTC (2 container statuses recorded)
May 4 16:57:40.046: INFO: Container tas-controller ready: true, restart count 0
May 4 16:57:40.046: INFO: Container tas-extender ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-fa6e51db-2c82-4d13-b894-c450b5da4a4f 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-fa6e51db-2c82-4d13-b894-c450b5da4a4f off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-fa6e51db-2c82-4d13-b894-c450b5da4a4f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 16:57:56.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1592" for this suite.
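The three pods in this test all request hostPort 54321 yet schedule onto the same node, because the scheduler treats a host port as occupied only when hostPort, protocol, and hostIP all overlap. A small illustrative model of that rule (a sketch of the documented conflict check, not the scheduler's actual code):

```python
# Two hostPort requests conflict only if port and protocol match AND the
# host IPs overlap (equal, or either side binds the wildcard 0.0.0.0).

def host_ports_conflict(a, b):
    """a, b: (hostIP, hostPort, protocol) tuples."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

# The three pods from the log: same port, no pairwise conflict.
pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("127.0.0.2", 54321, "TCP")  # different hostIP
pod3 = ("127.0.0.2", 54321, "UDP")  # same hostIP as pod2, different protocol
```

A wildcard binding such as `("0.0.0.0", 54321, "TCP")` would conflict with both TCP pods, which is why only distinct hostIPs (or protocols) let all three land on one node.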
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.160 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":17,"completed":8,"skipped":2250,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 16:57:56.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 4 16:57:56.225: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:57:56.225: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:57:56.225: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:57:56.232: INFO: Number of nodes with available pods: 0
May 4 16:57:56.232: INFO: Node node1 is running more than one daemon pod
May 4 16:57:57.236: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 16:57:57.236: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value:
Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:57.236: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:57.239: INFO: Number of nodes with available pods: 0 May 4 16:57:57.239: INFO: Node node1 is running more than one daemon pod May 4 16:57:58.237: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:58.237: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:58.238: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:58.240: INFO: Number of nodes with available pods: 0 May 4 16:57:58.240: INFO: Node node1 is running more than one daemon pod May 4 16:57:59.238: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:59.238: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:59.238: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:57:59.241: INFO: Number of nodes with available pods: 0 May 4 16:57:59.241: INFO: Node node1 is running more than one daemon pod May 4 16:58:00.237: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:00.237: INFO: DaemonSet pods can't tolerate node master2 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:00.237: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:00.240: INFO: Number of nodes with available pods: 1 May 4 16:58:00.240: INFO: Node node2 is running more than one daemon pod May 4 16:58:01.236: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:01.236: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:01.236: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:01.238: INFO: Number of nodes with available pods: 2 May 4 16:58:01.238: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
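The "stop a daemon pod, check that the daemon pod is revived" step exercises the DaemonSet controller's reconcile loop: one pod per eligible node, so a deleted pod is recreated on its node. A minimal sketch of that reconciliation (illustrative only; the real controller is the Go DaemonSet controller in kube-controller-manager):

```python
# Toy reconcile step: the desired state is one daemon pod per eligible
# node; any eligible node without a running pod needs a replacement.

def reconcile(eligible_nodes, running_pods):
    """Return the nodes where a replacement daemon pod must be created."""
    return [n for n in eligible_nodes if n not in running_pods]

eligible = ["node1", "node2"]                 # masters already filtered out by taints
running = {"node1": "daemon-set-abc12"}       # the pod on node2 was just deleted
missing = reconcile(eligible, running)        # the controller revives a pod there
```

Once the replacement pod on the missing node becomes available, the log's wait loop again reports "Number of running nodes: 2, number of available pods: 2" and the test passes.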
May 4 16:58:01.251: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:01.252: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:01.252: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:01.254: INFO: Number of nodes with available pods: 1 May 4 16:58:01.254: INFO: Node node1 is running more than one daemon pod May 4 16:58:02.259: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:02.260: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:02.260: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:02.262: INFO: Number of nodes with available pods: 1 May 4 16:58:02.262: INFO: Node node1 is running more than one daemon pod May 4 16:58:03.259: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:03.259: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:03.259: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:03.261: INFO: Number of nodes with available pods: 1 May 4 16:58:03.261: INFO: 
Node node1 is running more than one daemon pod May 4 16:58:04.261: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:04.261: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:04.261: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:04.263: INFO: Number of nodes with available pods: 1 May 4 16:58:04.263: INFO: Node node1 is running more than one daemon pod May 4 16:58:05.260: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:05.260: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:05.260: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:05.263: INFO: Number of nodes with available pods: 1 May 4 16:58:05.263: INFO: Node node1 is running more than one daemon pod May 4 16:58:06.259: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:06.259: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:06.259: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:06.262: INFO: Number of nodes with 
available pods: 1 May 4 16:58:06.262: INFO: Node node1 is running more than one daemon pod May 4 16:58:07.262: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:07.262: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:07.262: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:07.266: INFO: Number of nodes with available pods: 1 May 4 16:58:07.266: INFO: Node node1 is running more than one daemon pod May 4 16:58:08.259: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:08.259: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:08.259: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:08.262: INFO: Number of nodes with available pods: 1 May 4 16:58:08.262: INFO: Node node1 is running more than one daemon pod May 4 16:58:09.259: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:09.260: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:09.260: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 
4 16:58:09.262: INFO: Number of nodes with available pods: 1 May 4 16:58:09.262: INFO: Node node1 is running more than one daemon pod May 4 16:58:10.259: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:10.259: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:10.259: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:10.262: INFO: Number of nodes with available pods: 1 May 4 16:58:10.262: INFO: Node node1 is running more than one daemon pod May 4 16:58:11.258: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:11.258: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:11.258: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:11.261: INFO: Number of nodes with available pods: 1 May 4 16:58:11.261: INFO: Node node1 is running more than one daemon pod May 4 16:58:12.261: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:12.261: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:12.261: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:12.263: INFO: Number of nodes with available pods: 1 May 4 16:58:12.263: INFO: Node node1 is running more than one daemon pod May 4 16:58:13.262: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:13.262: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:13.262: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:13.265: INFO: Number of nodes with available pods: 1 May 4 16:58:13.265: INFO: Node node1 is running more than one daemon pod May 4 16:58:14.258: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:14.258: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:14.258: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 16:58:14.261: INFO: Number of nodes with available pods: 2 May 4 16:58:14.261: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6238, will wait for the garbage collector to delete the pods May 4 16:58:14.320: INFO: Deleting DaemonSet.extensions daemon-set took: 4.221582ms May 4 16:58:15.020: INFO: 
Terminating DaemonSet.extensions daemon-set pods took: 700.422528ms May 4 16:58:20.023: INFO: Number of nodes with available pods: 0 May 4 16:58:20.023: INFO: Number of running nodes: 0, number of available pods: 0 May 4 16:58:20.025: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6238/daemonsets","resourceVersion":"51580"},"items":null} May 4 16:58:20.027: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6238/pods","resourceVersion":"51580"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:58:20.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6238" for this suite. • [SLOW TEST:23.876 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":9,"skipped":3252,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:58:20.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 4 16:58:20.071: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 4 16:58:20.079: INFO: Waiting for terminating namespaces to be deleted... May 4 16:58:20.082: INFO: Logging pods the apiserver thinks is on node node1 before test May 4 16:58:20.093: INFO: liveness-http from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded) May 4 16:58:20.093: INFO: Container liveness-http ready: false, restart count 29 May 4 16:58:20.093: INFO: cmk-init-discover-node1-m8vvw from kube-system started at 2021-05-04 14:54:32 +0000 UTC (3 container statuses recorded) May 4 16:58:20.093: INFO: Container discover ready: false, restart count 0 May 4 16:58:20.093: INFO: Container init ready: false, restart count 0 May 4 16:58:20.093: INFO: Container install ready: false, restart count 0 May 4 16:58:20.093: INFO: cmk-slg76 from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded) May 4 16:58:20.093: INFO: Container nodereport ready: true, restart count 0 May 4 16:58:20.093: INFO: Container reconcile ready: true, restart count 0 May 4 16:58:20.093: INFO: kube-flannel-d6pbl from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded) May 4 16:58:20.093: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:58:20.093: INFO: kube-multus-ds-amd64-pkmbz from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded) May 4 16:58:20.093: INFO: 
Container kube-multus ready: true, restart count 1 May 4 16:58:20.093: INFO: kube-proxy-t2mbn from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded) May 4 16:58:20.094: INFO: Container kube-proxy ready: true, restart count 1 May 4 16:58:20.094: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded) May 4 16:58:20.094: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 4 16:58:20.094: INFO: nginx-proxy-node1 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded) May 4 16:58:20.094: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:58:20.094: INFO: node-feature-discovery-worker-wfgl5 from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded) May 4 16:58:20.094: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:58:20.094: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded) May 4 16:58:20.094: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:58:20.094: INFO: collectd-4755t from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded) May 4 16:58:20.094: INFO: Container collectd ready: true, restart count 0 May 4 16:58:20.094: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:58:20.094: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:58:20.094: INFO: node-exporter-k8qd9 from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded) May 4 16:58:20.094: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:58:20.094: INFO: Container node-exporter ready: true, restart count 0 May 4 16:58:20.094: INFO: prometheus-k8s-0 from monitoring started at 2021-05-04 14:56:12 +0000 UTC (5 container statuses recorded) May 4 
16:58:20.094: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 4 16:58:20.094: INFO: Container grafana ready: true, restart count 0 May 4 16:58:20.094: INFO: Container prometheus ready: true, restart count 1 May 4 16:58:20.094: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 4 16:58:20.094: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 4 16:58:20.094: INFO: prometheus-operator-5bb8cb9d8f-rrrhf from monitoring started at 2021-05-04 14:56:03 +0000 UTC (2 container statuses recorded) May 4 16:58:20.094: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 4 16:58:20.094: INFO: Container prometheus-operator ready: true, restart count 0 May 4 16:58:20.094: INFO: Logging pods the apiserver thinks is on node node2 before test May 4 16:58:20.110: INFO: liveness-exec from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container liveness-exec ready: true, restart count 8 May 4 16:58:20.110: INFO: cmk-2fmbx from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded) May 4 16:58:20.110: INFO: Container nodereport ready: true, restart count 0 May 4 16:58:20.110: INFO: Container reconcile ready: true, restart count 0 May 4 16:58:20.110: INFO: cmk-init-discover-node2-zlxzj from kube-system started at 2021-05-04 14:54:52 +0000 UTC (3 container statuses recorded) May 4 16:58:20.110: INFO: Container discover ready: false, restart count 0 May 4 16:58:20.110: INFO: Container init ready: false, restart count 0 May 4 16:58:20.110: INFO: Container install ready: false, restart count 0 May 4 16:58:20.110: INFO: cmk-webhook-6c9d5f8578-fr595 from kube-system started at 2021-05-04 14:55:15 +0000 UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container cmk-webhook ready: true, restart count 0 May 4 16:58:20.110: INFO: kube-flannel-lnwkk from kube-system started at 2021-05-04 14:45:37 +0000 
UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container kube-flannel ready: true, restart count 2 May 4 16:58:20.110: INFO: kube-multus-ds-amd64-7r2s4 from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container kube-multus ready: true, restart count 1 May 4 16:58:20.110: INFO: kube-proxy-rfjjf from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container kube-proxy ready: true, restart count 2 May 4 16:58:20.110: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 4 16:58:20.110: INFO: nginx-proxy-node2 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container nginx-proxy ready: true, restart count 2 May 4 16:58:20.110: INFO: node-feature-discovery-worker-jzjqs from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container nfd-worker ready: true, restart count 0 May 4 16:58:20.110: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded) May 4 16:58:20.110: INFO: Container kube-sriovdp ready: true, restart count 0 May 4 16:58:20.110: INFO: collectd-dhwfp from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded) May 4 16:58:20.110: INFO: Container collectd ready: true, restart count 0 May 4 16:58:20.110: INFO: Container collectd-exporter ready: true, restart count 0 May 4 16:58:20.110: INFO: Container rbac-proxy ready: true, restart count 0 May 4 16:58:20.110: INFO: node-exporter-5lghf from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded) May 4 16:58:20.110: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 May 4 16:58:20.110: INFO: Container node-exporter ready: true, restart count 0 May 4 16:58:20.110: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x from monitoring started at 2021-05-04 14:59:02 +0000 UTC (2 container statuses recorded) May 4 16:58:20.110: INFO: Container tas-controller ready: true, restart count 0 May 4 16:58:20.110: INFO: Container tas-extender ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 May 4 16:58:20.161: INFO: Pod liveness-exec requesting resource cpu=0m on Node node2 May 4 16:58:20.161: INFO: Pod liveness-http requesting resource cpu=0m on Node node1 May 4 16:58:20.161: INFO: Pod cmk-2fmbx requesting resource cpu=0m on Node node2 May 4 16:58:20.161: INFO: Pod cmk-slg76 requesting resource cpu=0m on Node node1 May 4 16:58:20.161: INFO: Pod cmk-webhook-6c9d5f8578-fr595 requesting resource cpu=0m on Node node2 May 4 16:58:20.161: INFO: Pod kube-flannel-d6pbl requesting resource cpu=150m on Node node1 May 4 16:58:20.161: INFO: Pod kube-flannel-lnwkk requesting resource cpu=150m on Node node2 May 4 16:58:20.161: INFO: Pod kube-multus-ds-amd64-7r2s4 requesting resource cpu=100m on Node node2 May 4 16:58:20.161: INFO: Pod kube-multus-ds-amd64-pkmbz requesting resource cpu=100m on Node node1 May 4 16:58:20.161: INFO: Pod kube-proxy-rfjjf requesting resource cpu=0m on Node node2 May 4 16:58:20.161: INFO: Pod kube-proxy-t2mbn requesting resource cpu=0m on Node node1 May 4 16:58:20.161: INFO: Pod kubernetes-dashboard-86c6f9df5b-hwbpb requesting resource cpu=50m on Node node2 May 4 16:58:20.161: INFO: Pod kubernetes-metrics-scraper-678c97765c-6qwqq requesting resource cpu=0m on Node node1 May 4 16:58:20.161: INFO: Pod nginx-proxy-node1 
requesting resource cpu=25m on Node node1 May 4 16:58:20.161: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 May 4 16:58:20.161: INFO: Pod node-feature-discovery-worker-jzjqs requesting resource cpu=0m on Node node2 May 4 16:58:20.161: INFO: Pod node-feature-discovery-worker-wfgl5 requesting resource cpu=0m on Node node1 May 4 16:58:20.161: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt requesting resource cpu=0m on Node node1 May 4 16:58:20.161: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 requesting resource cpu=0m on Node node2 May 4 16:58:20.161: INFO: Pod collectd-4755t requesting resource cpu=0m on Node node1 May 4 16:58:20.161: INFO: Pod collectd-dhwfp requesting resource cpu=0m on Node node2 May 4 16:58:20.161: INFO: Pod node-exporter-5lghf requesting resource cpu=112m on Node node2 May 4 16:58:20.161: INFO: Pod node-exporter-k8qd9 requesting resource cpu=112m on Node node1 May 4 16:58:20.161: INFO: Pod prometheus-k8s-0 requesting resource cpu=300m on Node node1 May 4 16:58:20.161: INFO: Pod prometheus-operator-5bb8cb9d8f-rrrhf requesting resource cpu=100m on Node node1 May 4 16:58:20.161: INFO: Pod tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. May 4 16:58:20.161: INFO: Creating a pod which consumes cpu=53349m on Node node1 May 4 16:58:20.176: INFO: Creating a pod which consumes cpu=53594m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. 
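
The filler-pod sizes logged above (`cpu=53349m`, `cpu=53594m`) come from summing the CPU already requested on each node and requesting the remainder of the node's allocatable, so that one more pod fails with `Insufficient cpu`. A minimal sketch of that arithmetic, using hypothetical numbers rather than the actual node1/node2 allocatable values from this run:

```python
# Sketch of how the predicate test sizes its "filler" pods: sum the CPU
# requests already on the node, subtract from allocatable, and request the
# remainder in millicores. Numbers here are illustrative assumptions.

def filler_request_millicpu(allocatable_m, pod_requests_m):
    """CPU (in millicores) a filler pod must request to exhaust the node."""
    used = sum(pod_requests_m)
    return max(allocatable_m - used, 0)

# hypothetical node with 54000m allocatable and requests like those logged above:
# flannel 150m, multus 100m, nginx-proxy 25m, node-exporter 112m,
# prometheus 300m, prometheus-operator 100m
requests = [150, 100, 25, 112, 300, 100]
print(filler_request_millicpu(54000, requests))  # → 53213
```

Once both filler pods are running, any additional untolerating pod sees the `0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint` event recorded below.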
STEP: Considering event: Type = [Normal], Name = [filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e.167beb91487ec0cf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8196/filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e.167beb91a2c1f565], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.35/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e.167beb91a3741cac], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e.167beb91c1ed8ca5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 511.260968ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e.167beb91c871d919], Reason = [Created], Message = [Created container filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e] STEP: Considering event: Type = [Normal], Name = [filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e.167beb91ce65c36c], Reason = [Started], Message = [Started container filler-pod-00b8f48a-c905-4441-ad20-e97a684da31e] STEP: Considering event: Type = [Normal], Name = [filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13.167beb9147f057b0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8196/filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13 to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13.167beb919bcc5691], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.234/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13.167beb919ca844f2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13.167beb91ba196ead], Reason = 
[Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 493.941758ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13.167beb91c0d88633], Reason = [Created], Message = [Created container filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13] STEP: Considering event: Type = [Normal], Name = [filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13.167beb91c6ec154e], Reason = [Started], Message = [Started container filler-pod-d79c2727-abc0-461e-ba7a-19c0d1da2e13] STEP: Considering event: Type = [Warning], Name = [additional-pod.167beb923855e40e], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [additional-pod.167beb9238a9be95], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:58:25.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8196" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.193 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":10,"skipped":3430,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
May 4 16:58:25.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 May 4 16:58:25.280: INFO: Waiting up to 1m0s for all nodes to be ready May 4 16:59:25.339: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. May 4 16:59:25.365: INFO: Created pod: pod0-sched-preemption-low-priority May 4 16:59:25.386: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 4 16:59:53.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3611" for this suite. 
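
The preemption behavior validated above (a critical pod displacing the lower-priority pod that occupies the resources it needs) can be modeled roughly as below. This is an illustrative sketch under assumed pod names and sizes; the real victim-selection logic lives in kube-scheduler and considers far more than CPU.

```python
# Simplified model of the preemption decision exercised by this test:
# when a higher-priority pod cannot fit, evict lower-priority pods
# (lowest priority first) until enough CPU is free. Illustrative only.

def pick_victims(pending_priority, needed_m, free_m, running):
    """Return names of lower-priority pods to evict, or None if impossible."""
    victims = []
    candidates = sorted((p for p in running if p["priority"] < pending_priority),
                        key=lambda p: p["priority"])
    for pod in candidates:
        if free_m >= needed_m:
            break
        victims.append(pod["name"])
        free_m += pod["cpu_m"]
    return victims if free_m >= needed_m else None

running = [
    {"name": "pod0-sched-preemption-low-priority", "priority": 1, "cpu_m": 400},
    {"name": "pod1-sched-preemption-medium-priority", "priority": 50, "cpu_m": 400},
]
# a critical pod needing 400m on a full node evicts only the low-priority pod
print(pick_victims(2_000_000_000, 400, 0, running))
# → ['pod0-sched-preemption-low-priority']
```

This mirrors the test's setup: two pods consume 2/3 of node resources, then a critical pod requesting the same resources as the low-priority pod causes exactly that pod to be preempted while the medium-priority pod keeps running.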
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:88.224 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":11,"skipped":4095,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 4 16:59:53.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 4 16:59:53.788: INFO: Pod name wrapped-volume-race-d397d515-cf2c-4c62-819f-281d8c01753c: Found 3 pods out of 5
May 4 16:59:58.799: INFO: Pod name wrapped-volume-race-d397d515-cf2c-4c62-819f-281d8c01753c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d397d515-cf2c-4c62-819f-281d8c01753c in namespace emptydir-wrapper-4168, will wait for the garbage collector to delete the pods
May 4 17:00:12.882: INFO: Deleting ReplicationController wrapped-volume-race-d397d515-cf2c-4c62-819f-281d8c01753c took: 6.190566ms
May 4 17:00:13.583: INFO: Terminating ReplicationController wrapped-volume-race-d397d515-cf2c-4c62-819f-281d8c01753c pods took: 700.457093ms
STEP: Creating RC which spawns configmap-volume pods
May 4 17:00:30.000: INFO: Pod name wrapped-volume-race-6d23765a-1828-401b-a118-9045d5deaf5f: Found 0 pods out of 5
May 4 17:00:35.011: INFO: Pod name wrapped-volume-race-6d23765a-1828-401b-a118-9045d5deaf5f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6d23765a-1828-401b-a118-9045d5deaf5f in namespace emptydir-wrapper-4168, will wait for the garbage collector to delete the pods
May 4 17:00:51.112: INFO: Deleting ReplicationController wrapped-volume-race-6d23765a-1828-401b-a118-9045d5deaf5f took: 7.58739ms
May 4 17:00:51.813: INFO: Terminating ReplicationController wrapped-volume-race-6d23765a-1828-401b-a118-9045d5deaf5f pods took: 700.45055ms
STEP: Creating RC which spawns configmap-volume pods
May 4 17:00:59.927: INFO: Pod name wrapped-volume-race-2e4c874e-ca6d-453b-97cb-16cf9d1f9caa: Found 0 pods out of 5
May 4 17:01:04.933: INFO: Pod name wrapped-volume-race-2e4c874e-ca6d-453b-97cb-16cf9d1f9caa: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2e4c874e-ca6d-453b-97cb-16cf9d1f9caa in namespace emptydir-wrapper-4168, will wait for the garbage collector to delete the pods
May 4 17:01:21.015: INFO: Deleting ReplicationController wrapped-volume-race-2e4c874e-ca6d-453b-97cb-16cf9d1f9caa took: 6.578404ms
May 4 17:01:21.715: INFO: Terminating ReplicationController wrapped-volume-race-2e4c874e-ca6d-453b-97cb-16cf9d1f9caa pods took: 700.167653ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 17:01:30.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4168" for this suite.
• [SLOW TEST:96.780 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":12,"skipped":4393,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 17:01:30.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 4 17:01:30.284: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 4 17:01:30.291: INFO: Waiting for terminating namespaces to be deleted...
May 4 17:01:30.293: INFO: Logging pods the apiserver thinks is on node node1 before test
May 4 17:01:30.303: INFO: liveness-http from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.303: INFO: Container liveness-http ready: false, restart count 31
May 4 17:01:30.303: INFO: cmk-init-discover-node1-m8vvw from kube-system started at 2021-05-04 14:54:32 +0000 UTC (3 container statuses recorded)
May 4 17:01:30.303: INFO: Container discover ready: false, restart count 0
May 4 17:01:30.303: INFO: Container init ready: false, restart count 0
May 4 17:01:30.303: INFO: Container install ready: false, restart count 0
May 4 17:01:30.303: INFO: cmk-slg76 from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded)
May 4 17:01:30.303: INFO: Container nodereport ready: true, restart count 0
May 4 17:01:30.303: INFO: Container reconcile ready: true, restart count 0
May 4 17:01:30.303: INFO: kube-flannel-d6pbl from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.303: INFO: Container kube-flannel ready: true, restart count 2
May 4 17:01:30.303: INFO: kube-multus-ds-amd64-pkmbz from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.303: INFO: Container kube-multus ready: true, restart count 1
May 4 17:01:30.303: INFO: kube-proxy-t2mbn from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.303: INFO: Container kube-proxy ready: true, restart count 1
May 4 17:01:30.303: INFO: kubernetes-metrics-scraper-678c97765c-6qwqq from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.303: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 4 17:01:30.303: INFO: nginx-proxy-node1 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.303: INFO: Container nginx-proxy ready: true, restart count 2
May 4 17:01:30.303: INFO: node-feature-discovery-worker-wfgl5 from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.303: INFO: Container nfd-worker ready: true, restart count 0
May 4 17:01:30.303: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-hvrmt from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.303: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 17:01:30.303: INFO: collectd-4755t from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded)
May 4 17:01:30.303: INFO: Container collectd ready: true, restart count 0
May 4 17:01:30.303: INFO: Container collectd-exporter ready: true, restart count 0
May 4 17:01:30.303: INFO: Container rbac-proxy ready: true, restart count 0
May 4 17:01:30.303: INFO: node-exporter-k8qd9 from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded)
May 4 17:01:30.303: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 17:01:30.303: INFO: Container node-exporter ready: true, restart count 0
May 4 17:01:30.303: INFO: prometheus-k8s-0 from monitoring started at 2021-05-04 14:56:12 +0000 UTC (5 container statuses recorded)
May 4 17:01:30.303: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 4 17:01:30.303: INFO: Container grafana ready: true, restart count 0
May 4 17:01:30.303: INFO: Container prometheus ready: true, restart count 1
May 4 17:01:30.303: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 4 17:01:30.303: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 4 17:01:30.303: INFO: prometheus-operator-5bb8cb9d8f-rrrhf from monitoring started at 2021-05-04 14:56:03 +0000 UTC (2 container statuses recorded)
May 4 17:01:30.303: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 17:01:30.303: INFO: Container prometheus-operator ready: true, restart count 0
May 4 17:01:30.303: INFO: Logging pods the apiserver thinks is on node node2 before test
May 4 17:01:30.325: INFO: liveness-exec from examples-6137 started at 2021-05-04 15:33:56 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container liveness-exec ready: false, restart count 8
May 4 17:01:30.325: INFO: cmk-2fmbx from kube-system started at 2021-05-04 14:55:14 +0000 UTC (2 container statuses recorded)
May 4 17:01:30.325: INFO: Container nodereport ready: true, restart count 0
May 4 17:01:30.325: INFO: Container reconcile ready: true, restart count 0
May 4 17:01:30.325: INFO: cmk-init-discover-node2-zlxzj from kube-system started at 2021-05-04 14:54:52 +0000 UTC (3 container statuses recorded)
May 4 17:01:30.325: INFO: Container discover ready: false, restart count 0
May 4 17:01:30.325: INFO: Container init ready: false, restart count 0
May 4 17:01:30.325: INFO: Container install ready: false, restart count 0
May 4 17:01:30.325: INFO: cmk-webhook-6c9d5f8578-fr595 from kube-system started at 2021-05-04 14:55:15 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container cmk-webhook ready: true, restart count 0
May 4 17:01:30.325: INFO: kube-flannel-lnwkk from kube-system started at 2021-05-04 14:45:37 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container kube-flannel ready: true, restart count 2
May 4 17:01:30.325: INFO: kube-multus-ds-amd64-7r2s4 from kube-system started at 2021-05-04 14:45:46 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container kube-multus ready: true, restart count 1
May 4 17:01:30.325: INFO: kube-proxy-rfjjf from kube-system started at 2021-05-04 14:45:01 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container kube-proxy ready: true, restart count 2
May 4 17:01:30.325: INFO: kubernetes-dashboard-86c6f9df5b-hwbpb from kube-system started at 2021-05-04 14:46:10 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 4 17:01:30.325: INFO: nginx-proxy-node2 from kube-system started at 2021-05-04 14:51:11 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container nginx-proxy ready: true, restart count 2
May 4 17:01:30.325: INFO: node-feature-discovery-worker-jzjqs from kube-system started at 2021-05-04 14:51:40 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container nfd-worker ready: true, restart count 0
May 4 17:01:30.325: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wt4b2 from kube-system started at 2021-05-04 14:52:50 +0000 UTC (1 container statuses recorded)
May 4 17:01:30.325: INFO: Container kube-sriovdp ready: true, restart count 0
May 4 17:01:30.325: INFO: collectd-dhwfp from monitoring started at 2021-05-04 15:01:51 +0000 UTC (3 container statuses recorded)
May 4 17:01:30.325: INFO: Container collectd ready: true, restart count 0
May 4 17:01:30.325: INFO: Container collectd-exporter ready: true, restart count 0
May 4 17:01:30.325: INFO: Container rbac-proxy ready: true, restart count 0
May 4 17:01:30.325: INFO: node-exporter-5lghf from monitoring started at 2021-05-04 14:56:10 +0000 UTC (2 container statuses recorded)
May 4 17:01:30.325: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 4 17:01:30.325: INFO: Container node-exporter ready: true, restart count 0
May 4 17:01:30.325: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-4nd7x from monitoring started at 2021-05-04 14:59:02 +0000 UTC (2 container statuses recorded)
May 4 17:01:30.325: INFO: Container tas-controller ready: true, restart count 0
May 4 17:01:30.325: INFO: Container tas-extender ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.167bebbd8f844457], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.167bebbd8fda3c5c], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 17:01:31.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-394" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":13,"skipped":4506,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 17:01:31.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
May 4 17:01:31.400: INFO: Waiting up to 1m0s for all nodes to be ready
May 4 17:02:31.451: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 17:02:31.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
May 4 17:02:35.507: INFO: found a healthy node: node1
[It] runs ReplicaSets to verify preemption running path [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 4 17:02:53.564: INFO: pods created so far: [1 1 1]
May 4 17:02:53.564: INFO: length of pods created so far: 3
May 4 17:03:05.579: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 17:03:12.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-1228" for this suite.
[AfterEach] PreemptionExecutionPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 17:03:12.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8821" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:101.287 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":14,"skipped":4724,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 17:03:12.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 4 17:03:12.719: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:12.719: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:12.719: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:12.721: INFO: Number of nodes with available pods: 0
May 4 17:03:12.721: INFO: Node node1 is running more than one daemon pod
May 4 17:03:13.726: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:13.726: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:13.726: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:13.729: INFO: Number of nodes with available pods: 0
May 4 17:03:13.729: INFO: Node node1 is running more than one daemon pod
May 4 17:03:14.727: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:14.727: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:14.727: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:14.729: INFO: Number of nodes with available pods: 0
May 4 17:03:14.729: INFO: Node node1 is running more than one daemon pod
May 4 17:03:15.728: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:15.728: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:15.728: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:15.730: INFO: Number of nodes with available pods: 0
May 4 17:03:15.730: INFO: Node node1 is running more than one daemon pod
May 4 17:03:16.726: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:16.726: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:16.726: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:16.729: INFO: Number of nodes with available pods: 1
May 4 17:03:16.729: INFO: Node node2 is running more than one daemon pod
May 4 17:03:17.726: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:17.726: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:17.726: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:17.729: INFO: Number of nodes with available pods: 2
May 4 17:03:17.729: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 4 17:03:17.742: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:17.742: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:17.742: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:17.745: INFO: Number of nodes with available pods: 1
May 4 17:03:17.745: INFO: Node node1 is running more than one daemon pod
May 4 17:03:18.753: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:18.753: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:18.753: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:18.756: INFO: Number of nodes with available pods: 1
May 4 17:03:18.756: INFO: Node node1 is running more than one daemon pod
May 4 17:03:19.752: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:19.752: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:19.752: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:19.754: INFO: Number of nodes with available pods: 1
May 4 17:03:19.754: INFO: Node node1 is running more than one daemon pod
May 4 17:03:20.750: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:20.750: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:20.750: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:20.753: INFO: Number of nodes with available pods: 1
May 4 17:03:20.753: INFO: Node node1 is running more than one daemon pod
May 4 17:03:21.750: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:21.750: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:21.751: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:21.753: INFO: Number of nodes with available pods: 1
May 4 17:03:21.753: INFO: Node node1 is running more than one daemon pod
May 4 17:03:22.753: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:22.753: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:22.753: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 4 17:03:22.756: INFO: Number of nodes with available pods: 2
May 4 17:03:22.756: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5840, will wait for the garbage collector to delete the pods
May 4 17:03:22.819: INFO: Deleting DaemonSet.extensions daemon-set took: 5.029662ms
May 4 17:03:23.519: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.351246ms
May 4 17:03:29.922: INFO: Number of nodes with available pods: 0
May 4 17:03:29.922: INFO: Number of running nodes: 0, number of available pods: 0
May 4 17:03:29.924: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5840/daemonsets","resourceVersion":"54034"},"items":null}
May 4 17:03:29.927: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5840/pods","resourceVersion":"54034"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 17:03:29.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5840" for this suite.
• [SLOW TEST:17.284 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":15,"skipped":4809,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 4 17:03:29.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
May 4 17:03:29.972: INFO: Waiting up to 1m0s for all nodes to be ready
May 4 17:04:30.019: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
May 4 17:04:30.043: INFO: Created pod: pod0-sched-preemption-low-priority
May 4 17:04:30.063: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 4 17:04:54.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3188" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:84.197 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":16,"skipped":4881,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 4 17:04:54.151: INFO: Running AfterSuite actions on all nodes
May 4 17:04:54.151: INFO: Running AfterSuite actions on node 1
May 4 17:04:54.151: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":16,"skipped":5467,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]"]}

Summarizing 1 Failure:

[Fail] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:433

Ran 17 of 5484 Specs in 1152.592 seconds
FAIL! -- 16 Passed | 1 Failed | 0 Pending | 5467 Skipped
--- FAIL: TestE2E (1152.68s)
FAIL

Ginkgo ran 1 suite in 19m13.8163572s
Test Suite Failed