I0506 22:22:05.831716 22 e2e.go:129] Starting e2e run "1057b32d-d7b1-42c3-bb6f-a45d4408d584" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1651875724 - Will randomize all specs Will run 17 of 5773 specs May 6 22:22:05.896: INFO: >>> kubeConfig: /root/.kube/config May 6 22:22:05.901: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 6 22:22:05.931: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 6 22:22:05.996: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting May 6 22:22:05.996: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting May 6 22:22:05.996: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 6 22:22:05.996: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 6 22:22:05.996: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 6 22:22:06.014: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) May 6 22:22:06.014: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) May 6 22:22:06.014: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) May 6 22:22:06.014: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) May 6 22:22:06.014: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) May 6 22:22:06.014: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) May 6 22:22:06.014: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) May 6 22:22:06.014: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 6 22:22:06.014: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) May 6 22:22:06.014: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) May 6 22:22:06.014: INFO: e2e test version: v1.21.9 May 6 22:22:06.015: INFO: kube-apiserver version: v1.21.1 May 6 22:22:06.015: INFO: >>> kubeConfig: /root/.kube/config May 6 22:22:06.020: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:22:06.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred W0506 22:22:06.049028 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 22:22:06.049: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 22:22:06.052: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 22:22:06.054: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 22:22:06.063: INFO: Waiting for terminating namespaces to be deleted... May 6 22:22:06.065: INFO: Logging pods the apiserver thinks is on node node1 before test May 6 22:22:06.077: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded) May 6 22:22:06.077: INFO: Container discover ready: false, restart count 0 May 6 22:22:06.077: INFO: Container init ready: false, restart count 0 May 6 22:22:06.077: INFO: Container install ready: false, restart count 0 May 6 22:22:06.077: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded) May 6 22:22:06.077: INFO: Container nodereport ready: true, restart count 0 May 6 22:22:06.077: INFO: Container reconcile ready: true, restart count 0 May 6 22:22:06.077: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 6 22:22:06.077: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:22:06.077: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 6 22:22:06.077: INFO: Container kube-multus ready: true, restart count 1 May 6 22:22:06.077: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 6 22:22:06.078: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:22:06.078: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 6 22:22:06.078: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:22:06.078: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 6 22:22:06.078: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:22:06.078: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 6 22:22:06.078: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:22:06.078: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 6 22:22:06.078: INFO: Container collectd ready: true, restart count 0 May 6 22:22:06.078: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:22:06.078: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:22:06.078: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 6 22:22:06.078: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:22:06.078: INFO: Container node-exporter ready: true, restart count 0 May 6 22:22:06.078: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded) May 6 22:22:06.078: INFO: Container config-reloader ready: true, restart count 0 May 6 22:22:06.078: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:22:06.078: INFO: Container grafana ready: true, restart count 0 May 6 22:22:06.078: INFO: Container prometheus ready: true, restart count 1 May 6 22:22:06.078: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded) May 6 22:22:06.078: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:22:06.078: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:22:06.078: INFO: Logging pods the apiserver thinks is on node node2 before test May 6 22:22:06.095: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded) May 6 22:22:06.095: INFO: Container nodereport ready: true, restart count 0 May 6 22:22:06.095: INFO: Container reconcile ready: true, restart count 0 May 6 22:22:06.095: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded) May 6 22:22:06.095: INFO: Container discover ready: false, restart count 0 May 6 22:22:06.095: INFO: Container init ready: false, restart count 0 May 6 22:22:06.095: INFO: Container install ready: false, restart count 0 May 6 22:22:06.095: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded) May 6 22:22:06.095: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:22:06.095: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 6 22:22:06.095: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:22:06.095: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 6 22:22:06.095: INFO: Container kube-multus ready: true, restart count 1 May 6 22:22:06.095: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 6 22:22:06.095: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:22:06.095: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 6 22:22:06.095: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:22:06.096: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 6 22:22:06.096: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:22:06.096: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 6 22:22:06.096: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:22:06.096: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 6 22:22:06.096: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:22:06.096: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 6 22:22:06.096: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:22:06.096: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 6 22:22:06.096: INFO: Container collectd ready: true, restart count 0 May 6 22:22:06.096: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:22:06.096: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:22:06.096: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 6 22:22:06.096: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:22:06.096: INFO: Container node-exporter ready: true, restart count 0 May 6 22:22:06.096: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded) May 6 22:22:06.096: INFO: Container tas-extender ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 May 6 22:22:06.149: INFO: Pod cmk-cb5rv requesting resource cpu=0m on Node node2 May 6 22:22:06.149: INFO: Pod cmk-trkp8 requesting resource cpu=0m on Node node1 May 6 22:22:06.149: INFO: Pod cmk-webhook-6c9d5f8578-vllpr requesting resource cpu=0m on Node node2 May 6 22:22:06.149: INFO: Pod kube-flannel-ffwfn requesting resource cpu=150m on Node node2 May 6 22:22:06.149: INFO: Pod kube-flannel-ph67x requesting resource cpu=150m on Node node1 May 6 22:22:06.149: INFO: Pod kube-multus-ds-amd64-2mv45 requesting resource cpu=100m on Node node1 May 6 22:22:06.149: INFO: Pod kube-multus-ds-amd64-gtzj9 requesting resource cpu=100m on Node node2 May 6 22:22:06.149: INFO: Pod kube-proxy-g77fj requesting resource cpu=0m on Node node2 May 6 22:22:06.149: INFO: Pod kube-proxy-xc75d requesting resource cpu=0m on Node node1 May 6 22:22:06.149: INFO: Pod kubernetes-dashboard-785dcbb76d-29wg6 requesting resource cpu=50m on Node node2 May 6 22:22:06.150: INFO: Pod kubernetes-metrics-scraper-5558854cb-4ztpz requesting resource cpu=0m on Node node2 May 6 22:22:06.150: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 May 6 22:22:06.150: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 May 6 22:22:06.150: INFO: Pod node-feature-discovery-worker-8phhs requesting resource cpu=0m on Node node2 May 6 22:22:06.150: INFO: Pod node-feature-discovery-worker-fbf8d requesting resource cpu=0m on Node node1 May 6 22:22:06.150: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h requesting resource cpu=0m on Node node2 May 6 22:22:06.150: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 requesting resource cpu=0m on Node node1 May 6 22:22:06.150: INFO: Pod collectd-mbz88 requesting resource cpu=0m on Node node2 May 6 22:22:06.150: INFO: Pod collectd-wq9cz requesting resource cpu=0m on Node node1 May 6 22:22:06.150: INFO: Pod node-exporter-4xqmj requesting resource cpu=112m on Node node2 May 6 22:22:06.150: INFO: Pod node-exporter-hqs4s requesting resource cpu=112m on Node node1 May 6 22:22:06.150: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 May 6 22:22:06.150: INFO: Pod prometheus-operator-585ccfb458-vrrfv requesting resource cpu=100m on Node node1 May 6 22:22:06.150: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. May 6 22:22:06.150: INFO: Creating a pod which consumes cpu=53419m on Node node1 May 6 22:22:06.161: INFO: Creating a pod which consumes cpu=53594m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-737a9bb0-b717-4785-8be4-cc9c36b54e30.16eca4388f8564e7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8803/filler-pod-737a9bb0-b717-4785-8be4-cc9c36b54e30 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-737a9bb0-b717-4785-8be4-cc9c36b54e30.16eca438e6e1fc71], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-737a9bb0-b717-4785-8be4-cc9c36b54e30.16eca4393099e7fa], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.236781184s] STEP: Considering event: Type = [Normal], Name = [filler-pod-737a9bb0-b717-4785-8be4-cc9c36b54e30.16eca4393fb2cd0a], Reason = [Created], Message = [Created container filler-pod-737a9bb0-b717-4785-8be4-cc9c36b54e30] STEP: Considering event: Type = [Normal], Name = [filler-pod-737a9bb0-b717-4785-8be4-cc9c36b54e30.16eca43946b63ba5], Reason = [Started], Message = [Started container filler-pod-737a9bb0-b717-4785-8be4-cc9c36b54e30] STEP: Considering event: Type = [Normal], Name = [filler-pod-89674b4c-31fa-4ac2-9b95-49082415bd24.16eca4388edc3d84], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8803/filler-pod-89674b4c-31fa-4ac2-9b95-49082415bd24 to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-89674b4c-31fa-4ac2-9b95-49082415bd24.16eca438e7af5350], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-89674b4c-31fa-4ac2-9b95-49082415bd24.16eca439023e6b25], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 445.576139ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-89674b4c-31fa-4ac2-9b95-49082415bd24.16eca4390bc8dd16], Reason = [Created], Message = [Created container filler-pod-89674b4c-31fa-4ac2-9b95-49082415bd24] STEP: Considering event: Type = [Normal], Name = [filler-pod-89674b4c-31fa-4ac2-9b95-49082415bd24.16eca43912da5dd7], Reason = [Started], Message = [Started container filler-pod-89674b4c-31fa-4ac2-9b95-49082415bd24] STEP: Considering event: Type = [Warning], Name = [additional-pod.16eca4397f3f9084], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:22:11.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8803" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.214 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":1,"skipped":144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:22:11.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:22:17.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1862" for this suite. STEP: Destroying namespace "nsdeletetest-5131" for this suite. May 6 22:22:17.337: INFO: Namespace nsdeletetest-5131 was already deleted STEP: Destroying namespace "nsdeletetest-5761" for this suite. • [SLOW TEST:6.100 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":2,"skipped":491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:22:17.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:22:48.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4467" for this suite. STEP: Destroying namespace "nsdeletetest-6013" for this suite. May 6 22:22:48.444: INFO: Namespace nsdeletetest-6013 was already deleted STEP: Destroying namespace "nsdeletetest-6575" for this suite. • [SLOW TEST:31.107 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":3,"skipped":522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:22:48.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 6 22:22:48.755: INFO: Pod name wrapped-volume-race-6d996562-0a7b-4b4e-86b6-1eb744caeda2: Found 3 pods out of 5 May 6 22:22:53.764: INFO: Pod name wrapped-volume-race-6d996562-0a7b-4b4e-86b6-1eb744caeda2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6d996562-0a7b-4b4e-86b6-1eb744caeda2 in namespace emptydir-wrapper-5262, will wait for the garbage collector to delete the pods May 6 22:23:07.853: INFO: Deleting ReplicationController wrapped-volume-race-6d996562-0a7b-4b4e-86b6-1eb744caeda2 took: 6.183759ms May 6 22:23:07.953: INFO: Terminating ReplicationController wrapped-volume-race-6d996562-0a7b-4b4e-86b6-1eb744caeda2 pods took: 100.186055ms STEP: Creating RC which spawns configmap-volume pods May 6 22:23:16.969: INFO: Pod name wrapped-volume-race-9ab12622-575e-427f-8abf-4defd11ab95e: Found 0 pods out of 5 May 6 22:23:21.976: INFO: Pod name wrapped-volume-race-9ab12622-575e-427f-8abf-4defd11ab95e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9ab12622-575e-427f-8abf-4defd11ab95e in namespace emptydir-wrapper-5262, will wait for the garbage collector to delete the pods May 6 22:23:36.059: INFO: Deleting ReplicationController wrapped-volume-race-9ab12622-575e-427f-8abf-4defd11ab95e took: 4.723378ms May 6 22:23:36.159: INFO: Terminating ReplicationController wrapped-volume-race-9ab12622-575e-427f-8abf-4defd11ab95e pods took: 100.331861ms STEP: Creating RC which spawns configmap-volume pods May 6 22:23:46.877: INFO: Pod name wrapped-volume-race-689bccf2-8add-455c-83e6-4653c4870989: Found 0 pods out of 5 May 6 22:23:51.886: INFO: Pod name wrapped-volume-race-689bccf2-8add-455c-83e6-4653c4870989: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-689bccf2-8add-455c-83e6-4653c4870989 in namespace emptydir-wrapper-5262, will wait for the garbage collector to delete the pods May 6 22:24:05.965: INFO: Deleting ReplicationController wrapped-volume-race-689bccf2-8add-455c-83e6-4653c4870989 took: 4.925701ms May 6 22:24:06.066: INFO: Terminating ReplicationController wrapped-volume-race-689bccf2-8add-455c-83e6-4653c4870989 pods took: 101.081161ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:24:17.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5262" for this suite. • [SLOW TEST:88.589 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":4,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:24:17.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 6 22:24:17.078: INFO: Waiting up to 1m0s for all nodes to be ready May 6 22:25:17.133: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. May 6 22:25:17.161: INFO: Created pod: pod0-sched-preemption-low-priority May 6 22:25:17.180: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:25:41.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4181" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:84.217 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":5,"skipped":933,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:25:41.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 6 22:25:41.299: INFO: Waiting up to 1m0s for all nodes to be ready May 6 22:26:41.351: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. May 6 22:26:41.381: INFO: Created pod: pod0-sched-preemption-low-priority May 6 22:26:41.405: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:27:01.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2860" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:80.229 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":6,"skipped":1300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:27:01.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 6 22:27:01.533: INFO: Waiting up to 1m0s for all nodes to be ready May 6 22:28:01.584: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:28:01.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 6 22:28:05.641: INFO: found a healthy node: node2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:28:25.700: INFO: pods created so far: [1 1 1] May 6 22:28:25.700: INFO: length of pods created so far: 3 May 6 22:28:41.715: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:28:48.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8555" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:28:48.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7918" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:107.298 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":7,"skipped":1654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:28:48.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 6 22:28:48.851: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:48.851: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:48.851: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:48.853: INFO: Number of nodes with available pods: 0 May 6 22:28:48.853: INFO: Node node1 is running more than one daemon pod May 6 22:28:49.858: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:49.858: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:49.858: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:49.861: INFO: Number of nodes with available pods: 0 May 6 22:28:49.861: INFO: Node node1 is running more than one daemon pod May 6 22:28:50.858: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:50.858: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:50.859: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:50.861: INFO: Number of nodes with available pods: 0 May 6 22:28:50.861: INFO: Node node1 is running more than one daemon pod May 6 22:28:51.860: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:51.860: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:51.860: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:51.862: INFO: Number of nodes with available pods: 2 May 6 22:28:51.862: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 6 22:28:51.878: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:51.878: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:51.878: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:51.880: INFO: Number of nodes with available pods: 1 May 6 22:28:51.880: INFO: Node node2 is running more than one daemon pod May 6 22:28:52.885: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:52.885: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:52.885: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:52.887: INFO: Number of nodes with available pods: 1 May 6 22:28:52.887: INFO: Node node2 is running more than one daemon pod May 6 22:28:53.887: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:53.887: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:53.887: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:53.890: INFO: Number of nodes with available pods: 1 May 6 22:28:53.890: INFO: Node node2 is running more than one daemon pod May 6 22:28:54.886: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:54.886: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:54.886: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:54.891: INFO: Number of nodes with available pods: 1 May 6 22:28:54.891: INFO: Node node2 is running more than one daemon pod May 6 22:28:55.886: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:55.886: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:55.886: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:55.889: INFO: Number of nodes with available pods: 1 May 6 22:28:55.889: INFO: Node node2 is running more than one daemon pod May 6 22:28:56.889: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:56.889: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:56.889: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:28:56.892: INFO: Number of nodes with available pods: 2 May 6 22:28:56.892: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8980, will wait for the garbage collector to delete the pods May 6 22:28:56.955: INFO: Deleting DaemonSet.extensions daemon-set took: 5.693658ms May 6 22:28:57.055: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.667376ms May 6 22:29:06.858: INFO: Number of nodes with available pods: 0 May 6 22:29:06.858: INFO: Number of running nodes: 0, number of available pods: 0 May 6 22:29:06.864: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53769"},"items":null} May 6 22:29:06.867: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53770"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:29:06.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8980" for this suite. • [SLOW TEST:18.085 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":8,"skipped":1677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:29:06.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 6 22:29:06.917: INFO: Waiting up to 1m0s for all nodes to be ready May 6 22:30:06.971: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:30:06.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:30:07.013: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. May 6 22:30:07.017: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:30:07.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7063" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:30:07.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7347" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.202 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":9,"skipped":1736,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:30:07.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 22:30:07.114: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 22:30:07.122: INFO: Waiting for terminating namespaces to be deleted... May 6 22:30:07.124: INFO: Logging pods the apiserver thinks is on node node1 before test May 6 22:30:07.132: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded) May 6 22:30:07.132: INFO: Container discover ready: false, restart count 0 May 6 22:30:07.132: INFO: Container init ready: false, restart count 0 May 6 22:30:07.132: INFO: Container install ready: false, restart count 0 May 6 22:30:07.132: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded) May 6 22:30:07.132: INFO: Container nodereport ready: true, restart count 0 May 6 22:30:07.132: INFO: Container reconcile ready: true, restart count 0 May 6 22:30:07.132: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 6 22:30:07.132: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:30:07.132: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 6 22:30:07.132: INFO: Container kube-multus ready: true, restart count 1 May 6 22:30:07.132: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 6 22:30:07.132: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:30:07.132: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 6 22:30:07.132: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:30:07.132: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 6 22:30:07.132: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:30:07.132: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 6 22:30:07.132: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:30:07.132: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 6 22:30:07.132: INFO: Container collectd ready: true, restart count 0 May 6 22:30:07.132: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:30:07.132: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:30:07.132: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 6 22:30:07.132: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:30:07.132: INFO: Container node-exporter ready: true, restart count 0 May 6 22:30:07.132: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded) May 6 22:30:07.132: INFO: Container config-reloader ready: true, restart count 0 May 6 22:30:07.132: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:30:07.132: INFO: Container grafana ready: true, restart count 0 May 6 22:30:07.132: INFO: Container prometheus ready: true, restart count 1 May 6 22:30:07.132: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded) May 6 22:30:07.132: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:30:07.132: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:30:07.132: INFO: Logging pods the apiserver thinks is on node node2 before test May 6 22:30:07.142: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded) May 6 22:30:07.142: INFO: Container nodereport ready: true, restart count 0 May 6 22:30:07.142: INFO: Container reconcile ready: true, restart count 0 May 6 22:30:07.142: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded) May 6 22:30:07.142: INFO: Container discover ready: false, restart count 0 May 6 22:30:07.142: INFO: Container init ready: false, restart count 0 May 6 22:30:07.142: INFO: Container install ready: false, restart count 0 May 6 22:30:07.142: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:30:07.142: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:30:07.142: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container kube-multus ready: true, restart count 1 May 6 22:30:07.142: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:30:07.142: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:30:07.142: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:30:07.142: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:30:07.142: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:30:07.142: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:30:07.142: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 6 22:30:07.142: INFO: Container collectd ready: true, restart count 0 May 6 22:30:07.142: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:30:07.142: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:30:07.142: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 6 22:30:07.142: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:30:07.142: INFO: Container node-exporter ready: true, restart count 0 May 6 22:30:07.142: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded) May 6 22:30:07.142: INFO: Container tas-extender ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d380752f-da83-4d27-8f38-47f80ed6dcf7 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-d380752f-da83-4d27-8f38-47f80ed6dcf7 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d380752f-da83-4d27-8f38-47f80ed6dcf7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:35:15.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-306" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.154 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":10,"skipped":2116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:35:15.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 22:35:15.271: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 22:35:15.281: INFO: Waiting for terminating namespaces to be deleted... May 6 22:35:15.284: INFO: Logging pods the apiserver thinks is on node node1 before test May 6 22:35:15.295: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded) May 6 22:35:15.295: INFO: Container discover ready: false, restart count 0 May 6 22:35:15.295: INFO: Container init ready: false, restart count 0 May 6 22:35:15.295: INFO: Container install ready: false, restart count 0 May 6 22:35:15.295: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded) May 6 22:35:15.295: INFO: Container nodereport ready: true, restart count 0 May 6 22:35:15.295: INFO: Container reconcile ready: true, restart count 0 May 6 22:35:15.295: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 6 22:35:15.295: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:35:15.295: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 6 22:35:15.295: INFO: Container kube-multus ready: true, restart count 1 May 6 22:35:15.295: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 6 22:35:15.295: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:35:15.295: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 6 22:35:15.295: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:35:15.295: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 6 22:35:15.295: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:35:15.295: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 6 22:35:15.295: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:35:15.295: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 6 22:35:15.295: INFO: Container collectd ready: true, restart count 0 May 6 22:35:15.295: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:35:15.295: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:35:15.295: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 6 22:35:15.296: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:35:15.296: INFO: Container node-exporter ready: true, restart count 0 May 6 22:35:15.296: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded) May 6 22:35:15.296: INFO: Container config-reloader ready: true, restart count 0 May 6 22:35:15.296: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:35:15.296: INFO: Container grafana ready: true, restart count 0 May 6 22:35:15.296: INFO: Container prometheus ready: true, restart count 1 May 6 22:35:15.296: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded) May 6 22:35:15.296: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:35:15.296: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:35:15.296: INFO: Logging pods the apiserver thinks is on node node2 before test May 6 22:35:15.306: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded) May 6 22:35:15.306: INFO: Container nodereport ready: true, restart count 0 May 6 22:35:15.306: INFO: Container reconcile ready: true, restart count 0 May 6 22:35:15.306: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded) May 6 22:35:15.306: INFO: Container discover ready: false, restart count 0 May 6 22:35:15.306: INFO: Container init ready: false, restart count 0 May 6 22:35:15.306: INFO: Container install ready: false, restart count 0 May 6 22:35:15.306: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:35:15.306: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:35:15.306: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container kube-multus ready: true, restart count 1 May 6 22:35:15.306: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:35:15.306: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:35:15.306: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:35:15.306: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:35:15.306: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:35:15.306: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 6 22:35:15.306: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:35:15.306: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 6 22:35:15.306: INFO: Container collectd ready: true, restart count 0 May 6 22:35:15.306: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:35:15.306: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:35:15.306: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 6 22:35:15.306: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:35:15.307: INFO: Container node-exporter ready: true, restart count 0 May 6 22:35:15.307: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded) May 6 22:35:15.307: INFO: Container tas-extender ready: true, restart count 0 May 6 22:35:15.307: INFO: pod4 from sched-pred-306 started at 2022-05-06 22:30:11 +0000 UTC (1 container statuses recorded) May 6 22:35:15.307: INFO: Container agnhost ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-844ead93-ab9d-4e2f-8dc8-6ce8a320e140 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-844ead93-ab9d-4e2f-8dc8-6ce8a320e140 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-844ead93-ab9d-4e2f-8dc8-6ce8a320e140 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:35:23.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3548" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.150 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":11,"skipped":2282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:35:23.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 6 22:35:23.429: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 22:35:23.438: INFO: Waiting for terminating namespaces to be deleted... May 6 22:35:23.440: INFO: Logging pods the apiserver thinks is on node node1 before test May 6 22:35:23.450: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded) May 6 22:35:23.450: INFO: Container discover ready: false, restart count 0 May 6 22:35:23.450: INFO: Container init ready: false, restart count 0 May 6 22:35:23.450: INFO: Container install ready: false, restart count 0 May 6 22:35:23.450: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded) May 6 22:35:23.450: INFO: Container nodereport ready: true, restart count 0 May 6 22:35:23.451: INFO: Container reconcile ready: true, restart count 0 May 6 22:35:23.451: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 6 22:35:23.451: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:35:23.451: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 6 22:35:23.451: INFO: Container kube-multus ready: true, restart count 1 May 6 22:35:23.451: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 6 22:35:23.451: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:35:23.451: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 6 22:35:23.451: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:35:23.451: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 6 22:35:23.451: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:35:23.451: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 6 22:35:23.451: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:35:23.451: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 6 22:35:23.451: INFO: Container collectd ready: true, restart count 0 May 6 22:35:23.451: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:35:23.451: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:35:23.451: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 6 22:35:23.451: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:35:23.451: INFO: Container node-exporter ready: true, restart count 0 May 6 22:35:23.451: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded) May 6 22:35:23.451: INFO: Container config-reloader ready: true, restart count 0 May 6 22:35:23.451: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:35:23.451: INFO: Container grafana ready: true, restart count 0 May 6 22:35:23.451: INFO: Container prometheus ready: true, restart count 1 May 6 22:35:23.451: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded) May 6 22:35:23.451: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:35:23.451: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:35:23.451: INFO: with-labels from sched-pred-3548 started at 2022-05-06 22:35:19 +0000 UTC (1 container statuses recorded) May 6 22:35:23.451: INFO: Container with-labels ready: true, restart count 0 May 6 22:35:23.451: INFO: Logging pods the apiserver thinks is on node node2 before test May 6 22:35:23.459: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded) May 6 22:35:23.459: INFO: Container nodereport ready: true, restart count 0 May 6 22:35:23.459: INFO: Container reconcile ready: true, restart count 0 May 6 22:35:23.459: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded) May 6 22:35:23.459: INFO: Container discover ready: false, restart count 0 May 6 22:35:23.459: INFO: Container init ready: false, restart count 0 May 6 22:35:23.459: INFO: Container install ready: false, restart count 0 May 6 22:35:23.459: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:35:23.459: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:35:23.459: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container kube-multus ready: true, restart count 1 May 6 22:35:23.459: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:35:23.459: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:35:23.459: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:35:23.459: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:35:23.459: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:35:23.459: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:35:23.459: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 6 22:35:23.459: INFO: Container collectd ready: true, restart count 0 May 6 22:35:23.459: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:35:23.459: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:35:23.459: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 6 22:35:23.459: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:35:23.459: INFO: Container node-exporter ready: true, restart count 0 May 6 22:35:23.459: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container tas-extender ready: true, restart count 0 May 6 22:35:23.459: INFO: pod4 from sched-pred-306 started at 2022-05-06 22:30:11 +0000 UTC (1 container statuses recorded) May 6 22:35:23.459: INFO: Container agnhost ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16eca4f23412a667], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:35:24.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9062" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":12,"skipped":2591,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:35:24.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:35:24.562: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 6 22:35:24.576: INFO: Number of nodes with available pods: 0 May 6 22:35:24.576: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 6 22:35:24.601: INFO: Number of nodes with available pods: 0 May 6 22:35:24.601: INFO: Node node1 is running more than one daemon pod May 6 22:35:25.605: INFO: Number of nodes with available pods: 0 May 6 22:35:25.605: INFO: Node node1 is running more than one daemon pod May 6 22:35:26.605: INFO: Number of nodes with available pods: 0 May 6 22:35:26.605: INFO: Node node1 is running more than one daemon pod May 6 22:35:27.604: INFO: Number of nodes with available pods: 0 May 6 22:35:27.605: INFO: Node node1 is running more than one daemon pod May 6 22:35:28.609: INFO: Number of nodes with available pods: 0 May 6 22:35:28.609: INFO: Node node1 is running more than one daemon pod May 6 22:35:29.605: INFO: Number of nodes with available pods: 0 May 6 22:35:29.605: INFO: Node node1 is running more than one daemon pod May 6 22:35:30.604: INFO: Number of nodes with available pods: 1 May 6 22:35:30.604: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 6 22:35:30.620: INFO: Number of nodes with available pods: 1 May 6 22:35:30.620: INFO: Number of running nodes: 0, number of available pods: 1 May 6 22:35:31.624: INFO: Number of nodes with available pods: 0 May 6 22:35:31.624: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 6 22:35:31.634: INFO: Number of nodes with available pods: 0 May 6 22:35:31.634: INFO: Node node1 is running more than one daemon pod May 6 22:35:32.637: INFO: Number of nodes with available pods: 0 May 6 22:35:32.637: INFO: Node node1 is running more than one daemon pod May 6 22:35:33.639: INFO: Number of nodes with available pods: 0 May 6 22:35:33.639: INFO: Node node1 is running more than one daemon pod May 6 22:35:34.638: INFO: Number of nodes with available pods: 0 May 6 22:35:34.639: INFO: Node node1 is running more than one daemon pod May 6 22:35:35.639: INFO: Number of nodes with available pods: 0 May 6 22:35:35.639: INFO: Node node1 is running more than one daemon pod May 6 22:35:36.638: INFO: Number of nodes with available pods: 0 May 6 22:35:36.638: INFO: Node node1 is running more than one daemon pod May 6 22:35:37.640: INFO: Number of nodes with available pods: 0 May 6 22:35:37.640: INFO: Node node1 is running more than one daemon pod May 6 22:35:38.641: INFO: Number of nodes with available pods: 0 May 6 22:35:38.641: INFO: Node node1 is running more than one daemon pod May 6 22:35:39.638: INFO: Number of nodes with available pods: 0 May 6 22:35:39.638: INFO: Node node1 is running more than one daemon pod May 6 22:35:40.640: INFO: Number of nodes with available pods: 0 May 6 22:35:40.640: INFO: Node node1 is running more than one daemon pod May 6 22:35:41.638: INFO: Number of nodes with available pods: 0 May 6 22:35:41.638: INFO: Node node1 is running more than one daemon pod May 6 22:35:42.638: INFO: Number of nodes with available pods: 0 May 6 22:35:42.638: INFO: Node node1 is running more than one daemon pod May 6 22:35:43.640: INFO: Number of nodes with available pods: 0 May 6 22:35:43.640: INFO: Node node1 is running more than one daemon pod May 6 22:35:44.638: INFO: Number of nodes with available pods: 0 May 6 22:35:44.638: INFO: Node node1 is running more than one daemon pod May 6 22:35:45.637: INFO: Number of nodes with available pods: 0 May 6 22:35:45.637: INFO: Node node1 is running more than one daemon pod May 6 22:35:46.640: INFO: Number of nodes with available pods: 0 May 6 22:35:46.640: INFO: Node node1 is running more than one daemon pod May 6 22:35:47.640: INFO: Number of nodes with available pods: 0 May 6 22:35:47.640: INFO: Node node1 is running more than one daemon pod May 6 22:35:48.640: INFO: Number of nodes with available pods: 0 May 6 22:35:48.640: INFO: Node node1 is running more than one daemon pod May 6 22:35:49.638: INFO: Number of nodes with available pods: 0 May 6 22:35:49.638: INFO: Node node1 is running more than one daemon pod May 6 22:35:50.640: INFO: Number of nodes with available pods: 1 May 6 22:35:50.640: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4761, will wait for the garbage collector to delete the pods May 6 22:35:50.704: INFO: Deleting DaemonSet.extensions daemon-set took: 6.004761ms May 6 22:35:50.804: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.232799ms May 6 22:35:56.707: INFO: Number of nodes with available pods: 0 May 6 22:35:56.707: INFO: Number of running nodes: 0, number of available pods: 0 May 6 22:35:56.709: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55060"},"items":null} May 6 22:35:56.711: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55060"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:35:56.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4761" for this suite. • [SLOW TEST:32.221 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":13,"skipped":2750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:35:56.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:35:56.802: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 6 22:35:56.810: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:56.811: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:56.811: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:56.813: INFO: Number of nodes with available pods: 0 May 6 22:35:56.813: INFO: Node node1 is running more than one daemon pod May 6 22:35:57.819: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:57.819: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:57.819: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:57.822: INFO: Number of nodes with available pods: 0 May 6 22:35:57.822: INFO: Node node1 is running more than one daemon pod May 6 22:35:58.820: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:58.820: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:58.820: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:58.823: INFO: Number of nodes with available pods: 0 May 6 22:35:58.823: INFO: Node node1 is running more than one daemon pod May 6 22:35:59.818: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:59.818: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:59.818: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:59.823: INFO: Number of nodes with available pods: 2 May 6 22:35:59.823: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 6 22:35:59.847: INFO: Wrong image for pod: daemon-set-drvww. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 6 22:35:59.847: INFO: Wrong image for pod: daemon-set-qwx9k. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 6 22:35:59.852: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:59.852: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:35:59.852: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:00.858: INFO: Wrong image for pod: daemon-set-drvww. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 6 22:36:00.864: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:00.864: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:00.864: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:01.856: INFO: Wrong image for pod: daemon-set-drvww. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 6 22:36:01.862: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:01.862: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:01.862: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:02.859: INFO: Wrong image for pod: daemon-set-drvww. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 6 22:36:02.862: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:02.862: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:02.862: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:03.857: INFO: Wrong image for pod: daemon-set-drvww. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 6 22:36:03.857: INFO: Pod daemon-set-lkh8b is not available May 6 22:36:03.862: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:03.862: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:03.862: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:04.856: INFO: Wrong image for pod: daemon-set-drvww. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 6 22:36:04.856: INFO: Pod daemon-set-lkh8b is not available May 6 22:36:04.860: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:04.860: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:04.860: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:05.856: INFO: Wrong image for pod: daemon-set-drvww. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 6 22:36:05.856: INFO: Pod daemon-set-lkh8b is not available May 6 22:36:05.860: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:05.860: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:05.860: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:06.863: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:06.863: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:06.863: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:07.863: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:07.863: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:07.863: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:08.860: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:08.860: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:08.860: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:09.862: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:09.862: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:09.862: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:10.864: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:10.864: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:10.864: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:11.862: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:11.862: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:11.863: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:12.862: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:12.862: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:12.862: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:13.861: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:13.861: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:13.861: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:14.860: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:14.860: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:14.860: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:15.863: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:15.863: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:15.863: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:16.857: INFO: Pod daemon-set-b87mx is not available May 6 22:36:16.862: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:16.862: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:16.862: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 6 22:36:16.866: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:16.866: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:16.866: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:16.868: INFO: Number of nodes with available pods: 1 May 6 22:36:16.868: INFO: Node node1 is running more than one daemon pod May 6 22:36:17.876: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:17.876: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:17.876: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:17.879: INFO: Number of nodes with available pods: 1 May 6 22:36:17.879: INFO: Node node1 is running more than one daemon pod May 6 22:36:18.876: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:18.877: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:18.877: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:18.879: INFO: Number of nodes with available pods: 1 May 6 22:36:18.879: INFO: Node node1 is running more than one daemon pod May 6 22:36:19.874: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:19.874: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:19.874: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:19.876: INFO: Number of nodes with available pods: 2 May 6 22:36:19.877: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7812, will wait for the garbage collector to delete the pods May 6 22:36:19.949: INFO: Deleting DaemonSet.extensions daemon-set took: 4.716181ms May 6 22:36:20.049: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.849314ms May 6 22:36:26.853: INFO: Number of nodes with available pods: 0 May 6 22:36:26.853: INFO: Number of running nodes: 0, number of available pods: 0 May 6 22:36:26.856: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55241"},"items":null} May 6 22:36:26.858: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55241"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:36:26.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7812" for this suite. • [SLOW TEST:30.122 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":14,"skipped":3862,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:36:26.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:36:26.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-210" for this suite. STEP: Destroying namespace "nspatchtest-4140858c-6cb1-41c3-a2d0-46784b55a79f-9084" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":15,"skipped":4656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:36:26.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 6 22:36:27.018: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:27.018: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:27.018: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:27.020: INFO: Number of nodes with available pods: 0 May 6 22:36:27.020: INFO: Node node1 is running more than one daemon pod May 6 22:36:28.025: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:28.025: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:28.025: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:28.027: INFO: Number of nodes with available pods: 0 May 6 22:36:28.027: INFO: Node node1 is running more than one daemon pod May 6 22:36:29.027: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:29.027: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:29.028: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:29.030: INFO: Number of nodes with available pods: 0 May 6 22:36:29.030: INFO: Node node1 is running more than one daemon pod May 6 22:36:30.026: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:30.026: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:30.026: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:30.029: INFO: Number of nodes with available pods: 1 May 6 22:36:30.029: INFO: Node node2 is running more than one daemon pod May 6 22:36:31.028: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:31.029: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:31.029: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:31.032: INFO: Number of nodes with available pods: 2 May 6 22:36:31.032: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 6 22:36:31.048: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:31.048: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:31.048: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:31.051: INFO: Number of nodes with available pods: 1 May 6 22:36:31.051: INFO: Node node1 is running more than one daemon pod May 6 22:36:32.058: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:32.058: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:32.058: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:32.061: INFO: Number of nodes with available pods: 1 May 6 22:36:32.061: INFO: Node node1 is running more than one daemon pod May 6 22:36:33.059: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:33.059: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:33.059: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:33.061: INFO: Number of nodes with available pods: 1 May 6 22:36:33.061: INFO: Node node1 is running more than one daemon pod May 6 22:36:34.059: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:34.059: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:34.059: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:34.062: INFO: Number of nodes with available pods: 1 May 6 22:36:34.062: INFO: Node node1 is running more than one daemon pod May 6 22:36:35.057: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:35.058: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:35.058: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:35.061: INFO: Number of nodes with available pods: 1 May 6 22:36:35.061: INFO: Node node1 is running more than one daemon pod May 6 22:36:36.058: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:36.058: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:36.058: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:36.060: INFO: Number of nodes with available pods: 1 May 6 22:36:36.060: INFO: Node node1 is running more than one daemon pod May 6 22:36:37.059: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:37.060: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:37.060: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:37.062: INFO: Number of nodes with available pods: 1 May 6 22:36:37.062: INFO: Node node1 is running more than one daemon pod May 6 22:36:38.058: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:38.058: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:38.058: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:38.061: INFO: Number of nodes with available pods: 1 May 6 22:36:38.061: INFO: Node node1 is running more than one daemon pod May 6 22:36:39.060: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:39.060: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:39.060: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:39.062: INFO: Number of nodes with available pods: 1 May 6 22:36:39.062: INFO: Node node1 is running more than one daemon pod May 6 22:36:40.057: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:40.057: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:40.057: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:40.060: INFO: Number of nodes with available pods: 1 May 6 22:36:40.060: INFO: Node node1 is running more than one daemon pod May 6 22:36:41.059: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:41.059: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:41.059: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:41.062: INFO: Number of nodes with available pods: 1 May 6 22:36:41.062: INFO: Node node1 is running more than one daemon pod May 6 22:36:42.059: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:42.060: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:42.060: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:42.062: INFO: Number of nodes with available pods: 2 May 6 22:36:42.062: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8049, will wait for the garbage collector to delete the pods May 6 22:36:42.124: INFO: Deleting DaemonSet.extensions daemon-set took: 6.389957ms May 6 22:36:42.224: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.346886ms May 6 22:36:46.828: INFO: Number of nodes with available pods: 0 May 6 22:36:46.828: INFO: Number of running nodes: 0, number of available pods: 0 May 6 22:36:46.830: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55423"},"items":null} May 6 22:36:46.832: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55423"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:36:46.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8049" for this suite. • [SLOW TEST:19.873 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":16,"skipped":5012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:36:46.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:36:46.895: INFO: Create a RollingUpdate DaemonSet May 6 22:36:46.898: INFO: Check that daemon pods launch on every node of the cluster May 6 22:36:46.903: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:46.903: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:46.903: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:46.905: INFO: Number of nodes with available pods: 0 May 6 22:36:46.905: INFO: Node node1 is running more than one daemon pod May 6 22:36:47.910: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:47.910: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:47.910: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:47.913: INFO: Number of nodes with available pods: 0 May 6 22:36:47.913: INFO: Node node1 is running more than one daemon pod May 6 22:36:48.910: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:48.910: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:48.910: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:48.913: INFO: Number of nodes with available pods: 0 May 6 22:36:48.913: INFO: Node node1 is running more than one daemon pod May 6 22:36:49.910: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:49.910: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:49.910: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:49.913: INFO: Number of nodes with available pods: 2 May 6 22:36:49.913: INFO: Number of running nodes: 2, number of available pods: 2 May 6 22:36:49.913: INFO: Update the DaemonSet to trigger a rollout May 6 22:36:49.919: INFO: Updating DaemonSet daemon-set May 6 22:36:56.935: INFO: Roll back the DaemonSet before rollout is complete May 6 22:36:56.942: INFO: Updating DaemonSet daemon-set May 6 22:36:56.942: INFO: Make sure DaemonSet rollback is complete May 6 22:36:56.945: INFO: Wrong image for pod: daemon-set-cqpwz. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. May 6 22:36:56.945: INFO: Pod daemon-set-cqpwz is not available May 6 22:36:56.949: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:56.949: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:56.949: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:57.957: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:57.957: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:57.957: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:58.962: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:58.962: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:58.962: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:59.958: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:59.958: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:36:59.958: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:00.960: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:00.960: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:00.960: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:01.959: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:01.959: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:01.959: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:02.961: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:02.961: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:02.961: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:03.961: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:03.961: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:03.961: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:04.957: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:04.957: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:04.957: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:05.959: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:05.959: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:05.959: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:06.953: INFO: Pod daemon-set-fc77w is not available May 6 22:37:06.959: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:06.959: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 22:37:06.959: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-355, will wait for the garbage collector to delete the pods May 6 22:37:07.025: INFO: Deleting DaemonSet.extensions daemon-set took: 7.03349ms May 6 22:37:07.126: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.910857ms May 6 22:37:16.830: INFO: Number of nodes with available pods: 0 May 6 22:37:16.830: INFO: Number of running nodes: 0, number of available pods: 0 May 6 22:37:16.832: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55598"},"items":null} May 6 22:37:16.834: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55598"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:37:16.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-355" for this suite. • [SLOW TEST:30.001 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":17,"skipped":5063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 6 22:37:16.871: INFO: Running AfterSuite actions on all nodes May 6 22:37:16.871: INFO: Running AfterSuite actions on node 1 May 6 22:37:16.871: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":17,"skipped":5756,"failed":0} Ran 17 of 5773 Specs in 910.979 seconds SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5756 Skipped PASS Ginkgo ran 1 suite in 15m12.40187725s Test Suite Passed