I1023 01:12:48.975007 23 e2e.go:129] Starting e2e run "a692ebc5-aebe-425e-8cb2-4e7b07ac0f5f" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1634951567 - Will randomize all specs
Will run 17 of 5770 specs

Oct 23 01:12:49.036: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:12:49.041: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 23 01:12:49.069: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 01:12:49.142: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 01:12:49.142: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 01:12:49.142: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 01:12:49.142: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 01:12:49.142: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 23 01:12:49.153: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 23 01:12:49.153: INFO: e2e test version: v1.21.5
Oct 23 01:12:49.154: INFO: kube-apiserver version: v1.21.1
Oct 23 01:12:49.154: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 01:12:49.160: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:12:49.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
W1023 01:12:49.189898 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 01:12:49.190: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 01:12:49.193: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 23 01:12:49.195: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 23 01:12:49.204: INFO: Waiting for terminating namespaces to be deleted...
Oct 23 01:12:49.205: INFO: Logging pods the apiserver thinks is on node node1 before test
Oct 23 01:12:49.215: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container discover ready: false, restart count 0
Oct 23 01:12:49.215: INFO: Container init ready: false, restart count 0
Oct 23 01:12:49.215: INFO: Container install ready: false, restart count 0
Oct 23 01:12:49.215: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container nodereport ready: true, restart count 0
Oct 23 01:12:49.215: INFO: Container reconcile ready: true, restart count 0
Oct 23 01:12:49.215: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container kube-flannel ready: true, restart count 3
Oct 23 01:12:49.215: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:12:49.215: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:12:49.215: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container kubernetes-dashboard ready: true, restart count 1
Oct 23 01:12:49.215: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 01:12:49.215: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 01:12:49.215: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 01:12:49.215: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 01:12:49.215: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded)
Oct 23 01:12:49.215: INFO: Container collectd ready: true, restart count 0
Oct 23 01:12:49.215: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 01:12:49.215: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 01:12:49.216: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded)
Oct 23 01:12:49.216: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:12:49.216: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:12:49.216: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded)
Oct 23 01:12:49.216: INFO: Container config-reloader ready: true, restart count 0
Oct 23 01:12:49.216: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 01:12:49.216: INFO: Container grafana ready: true, restart count 0
Oct 23 01:12:49.216: INFO: Container prometheus ready: true, restart count 1
Oct 23 01:12:49.216: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded)
Oct 23 01:12:49.216: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:12:49.216: INFO: Container prometheus-operator ready: true, restart count 0
Oct 23 01:12:49.216: INFO: Logging pods the apiserver thinks is on node node2 before test
Oct 23 01:12:49.222: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container discover ready: false, restart count 0
Oct 23 01:12:49.222: INFO: Container init ready: false, restart count 0
Oct 23 01:12:49.222: INFO: Container install ready: false, restart count 0
Oct 23 01:12:49.222: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container nodereport ready: true, restart count 1
Oct 23 01:12:49.222: INFO: Container reconcile ready: true, restart count 0
Oct 23 01:12:49.222: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container cmk-webhook ready: true, restart count 0
Oct 23 01:12:49.222: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container kube-flannel ready: true, restart count 2
Oct 23 01:12:49.222: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:12:49.222: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:12:49.222: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 01:12:49.222: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 01:12:49.222: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 01:12:49.222: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded)
Oct 23 01:12:49.222: INFO: Container collectd ready: true, restart count 0
Oct 23 01:12:49.222: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 01:12:49.222: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 01:12:49.223: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded)
Oct 23 01:12:49.223: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:12:49.223: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:12:49.223: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded)
Oct 23 01:12:49.223: INFO: Container tas-extender ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b083ca3664eafb], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:12:50.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3760" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":1,"skipped":393,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:12:50.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 23 01:12:50.305: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 01:13:50.360: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
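The pods created in the next step get their scheduling priority from PriorityClass objects. As a rough sketch of the kind of objects this spec exercises (class names, the value numbers, the image, and the resource figures here are illustrative assumptions, not what the framework actually generates):

# Sketch only: a lower `value` means the pod is preempted first.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: sched-preemption-low-priority    # hypothetical name
value: 10
globalDefault: false
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: sched-preemption-medium-priority # hypothetical name
value: 100
globalDefault: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod0-sched-preemption-low-priority
spec:
  priorityClassName: sched-preemption-low-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1        # assumed image
    resources:
      requests:
        cpu: "500m"                      # illustrative; the test sizes requests to ~2/3 of a node

A "critical" pod then requests the same resources under a system priority class (for example system-cluster-critical), so the scheduler can only place it by preempting the low-priority pod, which is what the spec verifies.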
Oct 23 01:13:50.389: INFO: Created pod: pod0-sched-preemption-low-priority
Oct 23 01:13:50.408: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:14:18.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7035" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:88.228 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":2,"skipped":432,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:14:18.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:14:24.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6976" for this suite.
STEP: Destroying namespace "nsdeletetest-2794" for this suite.
Oct 23 01:14:24.600: INFO: Namespace nsdeletetest-2794 was already deleted
STEP: Destroying namespace "nsdeletetest-8198" for this suite.
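This spec hinges on namespace lifecycle semantics: namespaced objects are deleted with their namespace, and a recreated namespace of the same name starts empty. A minimal sketch of the resources involved, with hypothetical names (the test generates nsdeletetest-* namespaces with random suffixes):

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example   # hypothetical; real ones get random suffixes
---
apiVersion: v1
kind: Service
metadata:
  name: test-service           # hypothetical name
  namespace: nsdeletetest-example
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80

Deleting the Namespace cascades to the Service, so after the "Recreating the namespace" step, listing services in the fresh namespace must come back empty.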
• [SLOW TEST:6.102 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":3,"skipped":517,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:14:24.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:14:55.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1668" for this suite.
STEP: Destroying namespace "nsdeletetest-8750" for this suite.
Oct 23 01:14:55.714: INFO: Namespace nsdeletetest-8750 was already deleted
STEP: Destroying namespace "nsdeletetest-6319" for this suite.
• [SLOW TEST:31.096 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":4,"skipped":2180,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:14:55.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 23 01:14:55.776: INFO: Create a RollingUpdate DaemonSet
Oct 23 01:14:55.779: INFO: Check that daemon pods launch on every node of the cluster
Oct 23 01:14:55.783: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:55.783: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:55.783: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:55.785: INFO: Number of nodes with available pods: 0
Oct 23 01:14:55.785: INFO: Node node1 is running more than one daemon pod
Oct 23 01:14:56.792: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:56.792: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:56.792: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:56.794: INFO: Number of nodes with available pods: 0
Oct 23 01:14:56.794: INFO: Node node1 is running more than one daemon pod
Oct 23 01:14:57.791: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:57.791: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:57.791: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:57.793: INFO: Number of nodes with available pods: 0
Oct 23 01:14:57.793: INFO: Node node1 is running more than one daemon pod
Oct 23 01:14:58.791: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:58.791: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:58.791: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:58.795: INFO: Number of nodes with available pods: 0
Oct 23 01:14:58.795: INFO: Node node1 is running more than one daemon pod
Oct 23 01:14:59.792: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:59.792: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:59.792: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:14:59.795: INFO: Number of nodes with available pods: 2
Oct 23 01:14:59.795: INFO: Number of running nodes: 2, number of available pods: 2
Oct 23 01:14:59.795: INFO: Update the DaemonSet to trigger a rollout
Oct 23 01:14:59.803: INFO: Updating DaemonSet daemon-set
Oct 23 01:15:14.819: INFO: Roll back the DaemonSet before rollout is complete
Oct 23 01:15:14.826: INFO: Updating DaemonSet daemon-set
Oct 23 01:15:14.826: INFO: Make sure DaemonSet rollback is complete
Oct 23 01:15:14.829: INFO: Wrong image for pod: daemon-set-4fg97. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
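The sequence just logged updates a RollingUpdate DaemonSet to an image that can never pull (foo:non-existent), then rolls back before the rollout completes. A sketch of such a DaemonSet, assuming illustrative labels (the conformance test builds its own object):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # illustrative label
  updateStrategy:
    type: RollingUpdate            # rollback-mid-rollout only makes sense with rolling updates
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        # The image the log expects to see restored after rollback:
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1

Rolling back (for example with kubectl rollout undo daemonset/daemon-set) restores the previous pod template; pods that never received the broken image keep running untouched, which is the "without unnecessary restarts" property being checked below.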
Oct 23 01:15:14.829: INFO: Pod daemon-set-4fg97 is not available
Oct 23 01:15:14.833: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:14.833: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:14.833: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:15.843: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:15.843: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:15.843: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:16.845: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:16.845: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:16.845: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:17.842: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:17.842: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:17.842: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:18.843: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:18.843: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:18.843: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:19.845: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:19.845: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:19.845: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:20.844: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:20.844: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:20.844: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:21.843: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:21.843: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:21.843: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:22.842: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:22.842: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:22.842: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:23.844: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:23.844: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:23.844: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:24.839: INFO: Pod daemon-set-gfgd9 is not available
Oct 23 01:15:24.844: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:24.844: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 23 01:15:24.844: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2875, will wait for the garbage collector to delete the pods
Oct 23 01:15:24.909: INFO: Deleting DaemonSet.extensions daemon-set took: 5.714034ms
Oct 23 01:15:25.010: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.497298ms
Oct 23 01:15:33.912: INFO: Number of nodes with available pods: 0
Oct 23 01:15:33.912: INFO: Number of running nodes: 0, number of available pods: 0
Oct 23 01:15:33.919: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"80456"},"items":null}
Oct 23 01:15:33.923: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"80456"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:15:33.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2875" for this suite.
• [SLOW TEST:38.225 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":5,"skipped":2326,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:15:33.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 23 01:15:33.970: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 23 01:15:33.979: INFO: Waiting for terminating namespaces to be deleted...
Oct 23 01:15:33.981: INFO: Logging pods the apiserver thinks is on node node1 before test
Oct 23 01:15:33.996: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container discover ready: false, restart count 0
Oct 23 01:15:33.996: INFO: Container init ready: false, restart count 0
Oct 23 01:15:33.996: INFO: Container install ready: false, restart count 0
Oct 23 01:15:33.996: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container nodereport ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Container reconcile ready: true, restart count 0
Oct 23 01:15:33.996: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container kube-flannel ready: true, restart count 3
Oct 23 01:15:33.996: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:15:33.996: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:15:33.996: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container kubernetes-dashboard ready: true, restart count 1
Oct 23 01:15:33.996: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 01:15:33.996: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 01:15:33.996: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 01:15:33.996: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 01:15:33.996: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container collectd ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 01:15:33.996: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:15:33.996: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container config-reloader ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Container grafana ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Container prometheus ready: true, restart count 1
Oct 23 01:15:33.996: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded)
Oct 23 01:15:33.996: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Container prometheus-operator ready: true, restart count 0
Oct 23 01:15:33.996: INFO: Logging pods the apiserver thinks is on node node2 before test
Oct 23 01:15:34.005: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded)
Oct 23 01:15:34.005: INFO: Container discover ready: false, restart count 0
Oct 23 01:15:34.006: INFO: Container init ready: false, restart count 0
Oct 23 01:15:34.006: INFO: Container install ready: false, restart count 0
Oct 23 01:15:34.006: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container nodereport ready: true, restart count 1
Oct 23 01:15:34.006: INFO: Container reconcile ready: true, restart count 0
Oct 23 01:15:34.006: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container cmk-webhook ready: true, restart count 0
Oct 23 01:15:34.006: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container kube-flannel ready: true, restart count 2
Oct 23 01:15:34.006: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container kube-multus ready: true, restart count 1
Oct 23 01:15:34.006: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 01:15:34.006: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 01:15:34.006: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 01:15:34.006: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 01:15:34.006: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container collectd ready: true, restart count 0
Oct 23 01:15:34.006: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 01:15:34.006: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 01:15:34.006: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 01:15:34.006: INFO: Container node-exporter ready: true, restart count 0
Oct 23 01:15:34.006: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded)
Oct 23 01:15:34.006: INFO: Container tas-extender ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5f285011-d228-4e4d-8fbb-8ab99653ca84 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-5f285011-d228-4e4d-8fbb-8ab99653ca84 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5f285011-d228-4e4d-8fbb-8ab99653ca84
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 01:15:42.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3089" for this suite.
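This spec is the positive counterpart of the earlier not-matching one: instead of an unsatisfiable selector producing a FailedScheduling event, the node is first labeled with a random key (visible in the STEP lines above) and the pod is relaunched with a matching nodeSelector. Roughly, the relaunched pod looks like this sketch (the pod name and image are assumptions; the label is the one from the log):

apiVersion: v1
kind: Pod
metadata:
  name: with-labels   # hypothetical name
spec:
  nodeSelector:
    # The random label the test applied to node2, per the log above:
    kubernetes.io/e2e-5f285011-d228-4e4d-8fbb-8ab99653ca84: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1   # assumed image

Because only node2 carries the label, the scheduler has exactly one feasible node, and the pod landing there is what the spec asserts before removing the label again.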
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.146 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":6,"skipped":2329,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 01:15:42.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
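In the wait loop that follows, every master node is skipped because the test DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so only node1 and node2 count toward "every node". A DaemonSet that should also cover tainted masters would add a toleration; a sketch, with illustrative metadata and image (the conformance DaemonSet deliberately omits this toleration):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-with-toleration   # hypothetical name
spec:
  selector:
    matchLabels:
      app: example                   # illustrative label
  template:
    metadata:
      labels:
        app: example
    spec:
      tolerations:
      # Allows scheduling onto nodes carrying the master NoSchedule taint:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.4.1   # assumed image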
Oct 23 01:15:42.154: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:42.154: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:42.154: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:42.157: INFO: Number of nodes with available pods: 0 Oct 23 01:15:42.157: INFO: Node node1 is running more than one daemon pod Oct 23 01:15:43.161: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:43.162: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:43.162: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:43.164: INFO: Number of nodes with available pods: 0 Oct 23 01:15:43.164: INFO: Node node1 is running more than one daemon pod Oct 23 01:15:44.161: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:44.161: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:44.161: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:44.163: INFO: Number of nodes with available pods: 0 Oct 23 01:15:44.163: INFO: Node node1 is running more than one daemon pod Oct 23 01:15:45.164: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:45.164: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:45.164: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:45.168: INFO: Number of nodes with available pods: 1 Oct 23 01:15:45.168: INFO: Node node1 is running more than one daemon pod Oct 23 01:15:46.164: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:46.164: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:46.164: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:46.167: INFO: Number of nodes with available pods: 2 Oct 23 01:15:46.167: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Oct 23 01:15:46.183: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:46.183: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:46.184: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:46.186: INFO: Number of nodes with available pods: 1 Oct 23 01:15:46.186: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:47.192: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:47.192: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:47.192: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:47.195: INFO: Number of nodes with available pods: 1 Oct 23 01:15:47.195: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:48.192: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:48.192: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:48.192: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:48.195: INFO: Number of nodes with available pods: 1 Oct 23 01:15:48.195: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:49.191: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:49.191: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:49.191: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:49.194: INFO: Number of nodes with available pods: 1 Oct 23 01:15:49.194: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:50.196: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:50.197: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:50.197: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:50.199: INFO: Number of nodes with available pods: 1 Oct 23 01:15:50.199: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:51.194: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:51.194: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:51.194: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:51.197: INFO: Number of nodes with available pods: 1 Oct 23 01:15:51.197: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:52.195: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:52.195: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:52.195: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:52.197: INFO: Number of nodes with available pods: 1 Oct 23 01:15:52.197: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:53.191: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:53.191: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:53.191: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:53.194: INFO: Number of nodes with available pods: 1 Oct 23 01:15:53.194: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:54.192: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:54.192: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:54.192: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:54.195: INFO: Number of nodes with available pods: 1 Oct 23 01:15:54.195: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:55.193: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:55.193: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:55.193: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:55.196: INFO: Number of nodes with available pods: 1 Oct 23 01:15:55.196: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:56.192: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:56.192: INFO: DaemonSet pods can't tolerate node 
master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:56.192: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:56.195: INFO: Number of nodes with available pods: 1 Oct 23 01:15:56.195: INFO: Node node2 is running more than one daemon pod Oct 23 01:15:57.194: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:57.194: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:57.194: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:15:57.197: INFO: Number of nodes with available pods: 2 Oct 23 01:15:57.197: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4406, will wait for the garbage collector to delete the pods Oct 23 01:15:57.256: INFO: Deleting DaemonSet.extensions daemon-set took: 4.886913ms Oct 23 01:15:57.357: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.794532ms Oct 23 01:16:04.260: INFO: Number of nodes with available pods: 0 Oct 23 01:16:04.260: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 01:16:04.263: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"80684"},"items":null} Oct 23 01:16:04.266: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"80684"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:16:04.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4406" for this suite. 
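A note on the "can't tolerate" lines that dominate the check above: the framework skips the three control-plane nodes because the test DaemonSet's pods carry no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so only node1 and node2 count toward coverage. A minimal sketch of the toleration such pods would need to land on the masters as well (illustrative, not part of the test itself; assumes k8s.io/api is available):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Matches the taint printed in the log:
	//   {Key:node-role.kubernetes.io/master Value: Effect:NoSchedule}
	// Pods carrying this toleration would no longer be skipped on masters.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
}
```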
• [SLOW TEST:22.185 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":7,"skipped":3116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:16:04.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Oct 23 01:16:04.602: INFO: Pod name wrapped-volume-race-f637c57d-c003-4c58-ae94-552732b3adb4: Found 1 pods out of 5 Oct 23 01:16:09.609: INFO: Pod name wrapped-volume-race-f637c57d-c003-4c58-ae94-552732b3adb4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f637c57d-c003-4c58-ae94-552732b3adb4 in namespace emptydir-wrapper-2092, will wait for the garbage collector to delete the pods Oct 23 01:16:25.695: INFO: Deleting ReplicationController wrapped-volume-race-f637c57d-c003-4c58-ae94-552732b3adb4 took: 6.157451ms Oct 23 01:16:25.795: INFO: Terminating ReplicationController wrapped-volume-race-f637c57d-c003-4c58-ae94-552732b3adb4 pods took: 100.185654ms STEP: Creating RC which spawns configmap-volume pods Oct 23 01:16:34.311: INFO: Pod name wrapped-volume-race-75ab7c68-f9a5-4687-b731-4cc4e5b2be0b: Found 0 pods out of 5 Oct 23 01:16:39.318: INFO: Pod name wrapped-volume-race-75ab7c68-f9a5-4687-b731-4cc4e5b2be0b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-75ab7c68-f9a5-4687-b731-4cc4e5b2be0b in namespace emptydir-wrapper-2092, will wait for the garbage collector to delete the pods Oct 23 01:16:53.412: INFO: Deleting ReplicationController wrapped-volume-race-75ab7c68-f9a5-4687-b731-4cc4e5b2be0b took: 4.929469ms Oct 23 01:16:53.513: INFO: Terminating ReplicationController wrapped-volume-race-75ab7c68-f9a5-4687-b731-4cc4e5b2be0b pods took: 100.949898ms STEP: Creating RC which spawns configmap-volume pods Oct 23 01:17:03.927: INFO: Pod name wrapped-volume-race-ab91f4ec-1137-4730-aaf5-40e443ff7902: Found 0 pods out of 5 Oct 23 01:17:08.939: INFO: Pod name wrapped-volume-race-ab91f4ec-1137-4730-aaf5-40e443ff7902: Found 5 pods out of 5 STEP: 
Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ab91f4ec-1137-4730-aaf5-40e443ff7902 in namespace emptydir-wrapper-2092, will wait for the garbage collector to delete the pods Oct 23 01:17:29.024: INFO: Deleting ReplicationController wrapped-volume-race-ab91f4ec-1137-4730-aaf5-40e443ff7902 took: 5.829392ms Oct 23 01:17:29.124: INFO: Terminating ReplicationController wrapped-volume-race-ab91f4ec-1137-4730-aaf5-40e443ff7902 pods took: 100.196807ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:17:44.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2092" for this suite. • [SLOW TEST:99.846 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":8,"skipped":3417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:17:44.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
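For context on the wrapped-volume-race pods that just passed: the spec creates 50 ConfigMaps and five pods, each pod mounting a ConfigMap-backed volume per map, the pattern that historically raced inside the kubelet's emptyDir wrapper. A rough sketch of that volume shape (only the counts of 50 and 5 come from the log; the names and the one-volume-per-map layout are an assumption):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One volume per ConfigMap, all mounted into the same pod.
	var volumes []corev1.Volume
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // illustrative name
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
	}
	fmt.Println("configmap volumes per pod:", len(volumes))
}
```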
Oct 23 01:17:44.190: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:44.190: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:44.190: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:44.193: INFO: Number of nodes with available pods: 0 Oct 23 01:17:44.193: INFO: Node node1 is running more than one daemon pod Oct 23 01:17:45.199: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:45.199: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:45.199: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:45.202: INFO: Number of nodes with available pods: 0 Oct 23 01:17:45.202: INFO: Node node1 is running more than one daemon pod Oct 23 01:17:46.197: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:46.197: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:46.197: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:46.199: INFO: Number of nodes with available pods: 0 Oct 23 01:17:46.199: INFO: Node node1 is running more than one daemon pod Oct 23 01:17:47.201: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:47.201: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:47.201: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:47.203: INFO: Number of nodes with available pods: 1 Oct 23 01:17:47.203: INFO: Node node1 is running more than one daemon pod Oct 23 01:17:48.197: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:48.197: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:48.197: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:48.200: INFO: Number of nodes with available pods: 2 Oct 23 01:17:48.200: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
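The step above flips one daemon pod's status to Failed by hand so the controller has something to revive. Roughly how that is done against the API (a sketch with client-go; the namespace matches the log, but the pod name is a placeholder):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// "daemon-set-xxxxx" stands in for one of the running daemon pods.
	pod, err := cs.CoreV1().Pods("daemonsets-8253").Get(ctx, "daemon-set-xxxxx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Write phase Failed through the status subresource; the DaemonSet
	// controller then deletes the pod and creates a replacement.
	pod.Status.Phase = corev1.PodFailed
	if _, err := cs.CoreV1().Pods("daemonsets-8253").UpdateStatus(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```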
Oct 23 01:17:48.216: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:48.216: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:48.216: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:48.219: INFO: Number of nodes with available pods: 1 Oct 23 01:17:48.219: INFO: Node node1 is running more than one daemon pod Oct 23 01:17:49.224: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:49.224: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:49.224: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:49.227: INFO: Number of nodes with available pods: 1 Oct 23 01:17:49.227: INFO: Node node1 is running more than one daemon pod Oct 23 01:17:50.227: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:50.227: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:50.227: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:50.230: INFO: Number of nodes with available pods: 1 Oct 23 01:17:50.230: INFO: Node node1 is running more than one daemon pod Oct 23 01:17:51.227: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:51.228: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:51.228: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:51.230: INFO: Number of nodes with available pods: 1 Oct 23 01:17:51.230: INFO: Node node1 is running more than one daemon pod Oct 23 01:17:52.227: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:52.228: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:52.228: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:17:52.231: INFO: Number of nodes with available pods: 2 Oct 23 01:17:52.231: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
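Waits like this one, and the once-per-second "Number of nodes with available pods" readouts throughout the suite, are polling loops over the daemon pods' per-node availability. A rough sketch of that loop's shape (the counting is faked here; in the real check it comes from listing the daemon pods and bucketing them by node):

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	nodesWithPod := 0
	err := wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		nodesWithPod++ // stand-in for the real per-node availability count
		fmt.Printf("Number of nodes with available pods: %d\n", nodesWithPod)
		return nodesWithPod >= 2, nil // done once both schedulable nodes are covered
	})
	if err != nil && errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("daemon pods never became available on every node")
	}
}
```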
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8253, will wait for the garbage collector to delete the pods Oct 23 01:17:52.295: INFO: Deleting DaemonSet.extensions daemon-set took: 5.958243ms Oct 23 01:17:52.395: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.282173ms Oct 23 01:18:04.298: INFO: Number of nodes with available pods: 0 Oct 23 01:18:04.298: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 01:18:04.301: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"81937"},"items":null} Oct 23 01:18:04.303: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"81937"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:18:04.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8253" for this suite. • [SLOW TEST:20.184 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":9,"skipped":3529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:18:04.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 01:18:04.367: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 01:19:04.417: INFO: Waiting for terminating namespaces to be deleted... 
[It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Oct 23 01:19:04.442: INFO: Created pod: pod0-sched-preemption-low-priority Oct 23 01:19:04.461: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:19:18.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6684" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:74.209 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":10,"skipped":4217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:19:18.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 01:19:18.574: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 01:20:18.631: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:20:18.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
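Both preemption specs hinge on PriorityClasses: the low- and medium-priority pods created above fill 2/3 of the node resources, and the later high-priority pod must evict one of them to schedule. A sketch of the kind of classes involved (names and values are illustrative, not the exact ones the suite registers):

```go
package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	classes := []schedulingv1.PriorityClass{
		{ObjectMeta: metav1.ObjectMeta{Name: "low-priority"}, Value: 10},
		{ObjectMeta: metav1.ObjectMeta{Name: "medium-priority"}, Value: 100},
		{ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, Value: 1000},
	}
	// Higher Value wins: a pending pod referencing "high-priority" may evict
	// running pods of lower classes when no node has room left.
	for _, pc := range classes {
		fmt.Printf("%-16s value=%d\n", pc.Name, pc.Value)
	}
}
```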
Oct 23 01:20:22.701: INFO: found a healthy node: node2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:20:36.766: INFO: pods created so far: [1 1 1] Oct 23 01:20:36.766: INFO: length of pods created so far: 3 Oct 23 01:20:48.781: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:20:55.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-890" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:20:55.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9256" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:97.316 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":11,"skipped":4241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:20:55.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:20:55.909: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
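This spec's subject is the DaemonSet updateStrategy. With RollingUpdate (the alternative being OnDelete), changing the pod template makes the controller replace daemon pods node by node, which is what the image-convergence lines further down are watching for. In Go terms (a sketch against k8s.io/api):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// The strategy under test; with OnDelete instead, pods would only be
	// replaced when something else deletes them.
	strategy := appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
	}
	fmt.Println(strategy.Type) // prints: RollingUpdate
}
```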
Oct 23 01:20:55.917: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:55.917: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:55.917: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:55.919: INFO: Number of nodes with available pods: 0 Oct 23 01:20:55.919: INFO: Node node1 is running more than one daemon pod Oct 23 01:20:56.924: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:56.924: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:56.924: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:56.927: INFO: Number of nodes with available pods: 0 Oct 23 01:20:56.927: INFO: Node node1 is running more than one daemon pod Oct 23 01:20:57.925: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:57.925: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:57.925: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:57.928: INFO: Number of nodes with available pods: 0 Oct 23 01:20:57.928: INFO: Node node1 is running more than one daemon pod Oct 23 01:20:58.926: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:58.926: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:58.926: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:58.928: INFO: Number of nodes with available pods: 1 Oct 23 01:20:58.929: INFO: Node node1 is running more than one daemon pod Oct 23 01:20:59.924: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:59.924: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:59.924: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:59.927: INFO: Number of nodes with available pods: 2 Oct 23 01:20:59.927: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
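The image flip driving the rollout below is httpd:2.4.38-1 to agnhost:2.32, both visible in the "Wrong image" lines. One way to make such an update is a strategic-merge patch (a sketch with client-go; the namespace and DaemonSet name match the log, while the container name "app" is a placeholder, and the e2e framework's own helper may differ):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Swap the pod template's image; "app" stands in for the real
	// container name in the DaemonSet's template.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[` +
		`{"name":"app","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)

	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if _, err := cs.AppsV1().DaemonSets("daemonsets-1536").Patch(
		context.Background(), "daemon-set",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```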
Oct 23 01:20:59.951: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:20:59.951: INFO: Wrong image for pod: daemon-set-qlqpk. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:20:59.958: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:59.958: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:20:59.958: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:00.964: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:00.967: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:00.967: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:00.967: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:01.963: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:01.968: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:01.968: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:01.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:02.961: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:02.965: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:02.965: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:02.965: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:03.965: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Oct 23 01:21:03.969: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:03.969: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:03.969: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:04.964: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:04.968: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:04.968: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:04.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:05.963: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:05.968: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:05.968: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:05.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:06.964: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:06.969: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:06.969: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:06.969: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:07.962: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:07.966: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:07.966: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:07.966: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:08.964: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Oct 23 01:21:08.968: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:08.968: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:08.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:09.962: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:09.967: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:09.967: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:09.967: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:10.965: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:10.969: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:10.969: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:10.969: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:11.963: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:11.969: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:11.969: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:11.969: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:12.962: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:12.966: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:12.966: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:12.966: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:13.962: INFO: Pod daemon-set-9xfct is not available Oct 23 01:21:13.962: INFO: Wrong image for pod: daemon-set-hwfqm. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:13.966: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:13.966: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:13.966: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:14.964: INFO: Pod daemon-set-9xfct is not available Oct 23 01:21:14.964: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:14.968: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:14.968: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:14.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:15.964: INFO: Pod daemon-set-9xfct is not available Oct 23 01:21:15.964: INFO: Wrong image for pod: daemon-set-hwfqm. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:21:15.968: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:15.968: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:15.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:16.969: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:16.969: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:16.969: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:17.966: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:17.966: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:17.966: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:18.967: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:18.968: INFO: DaemonSet pods can't tolerate node master2 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:18.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:19.968: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:19.968: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:19.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:20.968: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:20.968: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:20.968: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:21.966: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:21.967: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:21.967: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:22.967: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:22.967: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:22.967: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:23.966: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:23.966: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:23.966: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:24.964: INFO: Pod daemon-set-8wdgj is not available Oct 23 01:21:24.969: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:24.969: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:24.969: INFO: DaemonSet pods can't tolerate node master3 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Oct 23 01:21:24.973: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:24.973: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:24.973: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:24.976: INFO: Number of nodes with available pods: 1 Oct 23 01:21:24.976: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:25.981: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:25.981: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:25.981: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:25.984: INFO: Number of nodes with available pods: 1 Oct 23 01:21:25.984: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:26.983: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:26.983: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:26.983: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:21:26.987: INFO: Number of nodes with available pods: 2 Oct 23 01:21:26.987: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1536, will wait for the garbage collector to delete the pods Oct 23 01:21:27.061: INFO: Deleting DaemonSet.extensions daemon-set took: 4.981272ms Oct 23 01:21:27.161: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.650296ms Oct 23 01:21:33.864: INFO: Number of nodes with available pods: 0 Oct 23 01:21:33.864: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 01:21:33.866: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"82861"},"items":null} Oct 23 01:21:33.869: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"82861"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:21:33.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1536" for this suite. 
• [SLOW TEST:38.028 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":12,"skipped":4477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:21:33.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:21:33.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4573" for this suite. STEP: Destroying namespace "nspatchtest-1d5ed49c-c519-4e1c-a17a-5fb0e300a74d-7961" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":13,"skipped":4852,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:21:33.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:21:33.994: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
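The "complex daemon" flow gates scheduling on a node label rather than on taints: label a node and the daemon pod appears there; relabel it and the pod is un-scheduled, exactly the blue/green dance in the lines below. The selector side of that, sketched (the real label key the test uses is not shown in the log, so "color" here is illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Daemon pods with this selector stay off every node until some node
	// actually carries color=blue; relabeling that node to green evicts
	// them again.
	spec := corev1.PodSpec{
		NodeSelector: map[string]string{"color": "blue"},
	}
	fmt.Println(spec.NodeSelector)
}
```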
Oct 23 01:21:33.998: INFO: Number of nodes with available pods: 0 Oct 23 01:21:33.998: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Oct 23 01:21:34.014: INFO: Number of nodes with available pods: 0 Oct 23 01:21:34.014: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:35.018: INFO: Number of nodes with available pods: 0 Oct 23 01:21:35.018: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:36.018: INFO: Number of nodes with available pods: 0 Oct 23 01:21:36.018: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:37.019: INFO: Number of nodes with available pods: 0 Oct 23 01:21:37.019: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:38.017: INFO: Number of nodes with available pods: 1 Oct 23 01:21:38.017: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Oct 23 01:21:38.035: INFO: Number of nodes with available pods: 1 Oct 23 01:21:38.035: INFO: Number of running nodes: 0, number of available pods: 1 Oct 23 01:21:39.037: INFO: Number of nodes with available pods: 0 Oct 23 01:21:39.037: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Oct 23 01:21:39.044: INFO: Number of nodes with available pods: 0 Oct 23 01:21:39.044: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:40.051: INFO: Number of nodes with available pods: 0 Oct 23 01:21:40.051: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:41.050: INFO: Number of nodes with available pods: 0 Oct 23 01:21:41.050: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:42.051: INFO: Number of nodes with available pods: 0 Oct 23 01:21:42.051: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:43.049: INFO: Number of nodes with available pods: 0 Oct 23 01:21:43.049: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:44.051: INFO: Number of nodes with available pods: 0 Oct 23 01:21:44.051: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:45.050: INFO: Number of nodes with available pods: 0 Oct 23 01:21:45.050: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:46.051: INFO: Number of nodes with available pods: 0 Oct 23 01:21:46.051: INFO: Node node2 is running more than one daemon pod Oct 23 01:21:47.051: INFO: Number of nodes with available pods: 1 Oct 23 01:21:47.051: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9326, will wait for the garbage collector to delete the pods Oct 23 01:21:47.115: INFO: Deleting DaemonSet.extensions daemon-set took: 5.967847ms Oct 23 01:21:47.215: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.92616ms Oct 23 01:21:52.419: INFO: Number of nodes with available pods: 0 Oct 23 01:21:52.419: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 01:21:52.422: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"83024"},"items":null} Oct 23 01:21:52.424: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"83024"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:21:52.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9326" for this suite. • [SLOW TEST:18.489 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":14,"skipped":4898,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:21:52.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 01:21:52.482: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 01:21:52.489: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 01:21:52.491: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 01:21:52.502: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 01:21:52.502: INFO: Container discover ready: false, restart count 0 Oct 23 01:21:52.502: INFO: Container init ready: false, restart count 0 Oct 23 01:21:52.502: INFO: Container install ready: false, restart count 0 Oct 23 01:21:52.502: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 01:21:52.502: INFO: Container nodereport ready: true, restart count 0 Oct 23 01:21:52.502: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:21:52.502: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.502: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 01:21:52.502: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.502: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:21:52.502: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.502: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:21:52.502: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.502: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 01:21:52.502: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.502: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 01:21:52.502: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.502: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:21:52.502: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.502: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:21:52.502: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.502: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:21:52.502: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 01:21:52.502: INFO: Container collectd ready: true, restart count 0 Oct 23 01:21:52.502: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:21:52.502: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:21:52.502: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 01:21:52.502: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:21:52.502: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:21:52.502: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 01:21:52.502: INFO: Container config-reloader ready: true, restart count 0 Oct 23 01:21:52.502: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
01:21:52.502: INFO: Container grafana ready: true, restart count 0 Oct 23 01:21:52.502: INFO: Container prometheus ready: true, restart count 1 Oct 23 01:21:52.502: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 01:21:52.502: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:21:52.502: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 01:21:52.502: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 01:21:52.509: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 01:21:52.509: INFO: Container discover ready: false, restart count 0 Oct 23 01:21:52.509: INFO: Container init ready: false, restart count 0 Oct 23 01:21:52.509: INFO: Container install ready: false, restart count 0 Oct 23 01:21:52.509: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 01:21:52.509: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:21:52.509: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:21:52.509: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.509: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:21:52.509: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.509: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 01:21:52.509: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.509: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:21:52.509: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.509: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:21:52.509: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.509: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:21:52.509: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.509: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:21:52.509: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 01:21:52.509: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:21:52.509: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 01:21:52.509: INFO: Container collectd ready: true, restart count 0 Oct 23 01:21:52.509: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:21:52.509: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:21:52.509: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 01:21:52.509: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:21:52.509: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:21:52.509: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 
+0000 UTC (1 container statuses recorded) Oct 23 01:21:52.509: INFO: Container tas-extender ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7e8b0411-25f2-481c-98a0-9216b6c721c9 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-7e8b0411-25f2-481c-98a0-9216b6c721c9 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7e8b0411-25f2-481c-98a0-9216b6c721c9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:27:00.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-96" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.208 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":15,"skipped":4902,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:27:00.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 01:27:00.695: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 01:28:00.758: INFO: Waiting for terminating namespaces to be deleted... 
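[Editor's note, not part of the captured log] The hostPort spec that just finished hinges on one scheduling rule: a pod that binds hostPort 54322 on hostIP 0.0.0.0 reserves that port on every address of the node, so a second pod requesting the same port and protocol on the node's concrete address (10.10.190.208 in this run) can never fit on that node. Most of the 308 seconds above is the suite waiting to confirm that pod5 stays unschedulable. A sketch of the two conflicting port declarations; the node-pinning label is a hypothetical stand-in for the random e2e label seen in the log:

    // Sketch only, not the suite's code: two pods whose hostPort claims conflict.
    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostPortPod builds a pod pinned to one node (via a label such as the one
    // the suite applies) that claims TCP host port 54322 on the given host IP.
    func hostPortPod(name, hostIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                // Hypothetical stand-in for the random kubernetes.io/e2e-... label.
                NodeSelector: map[string]string{"kubernetes.io/e2e-example": "95"},
                Containers: []corev1.Container{{
                    Name:  "c",
                    Image: "k8s.gcr.io/pause:3.4.1", // hypothetical image choice
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 54322,
                        HostPort:      54322,
                        HostIP:        hostIP,
                        Protocol:      corev1.ProtocolTCP,
                    }},
                }},
            },
        }
    }

    // pod4 := hostPortPod("pod4", "0.0.0.0")       // schedules
    // pod5 := hostPortPod("pod5", "10.10.190.208") // stays Pending on that node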
[BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:00.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:28:00.797: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Oct 23 01:28:00.800: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:00.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-3059" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:00.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9485" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.206 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":16,"skipped":4995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:28:00.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 01:28:00.933: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 01:28:00.941: INFO: Waiting for terminating namespaces to be deleted... Oct 23 01:28:00.944: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 01:28:00.954: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 01:28:00.954: INFO: Container discover ready: false, restart count 0 Oct 23 01:28:00.954: INFO: Container init ready: false, restart count 0 Oct 23 01:28:00.954: INFO: Container install ready: false, restart count 0 Oct 23 01:28:00.954: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 01:28:00.954: INFO: Container nodereport ready: true, restart count 0 Oct 23 01:28:00.954: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:28:00.954: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.954: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 01:28:00.954: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.954: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:28:00.954: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.954: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:28:00.954: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.954: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 01:28:00.954: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.954: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 01:28:00.954: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.954: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:28:00.954: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.954: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:28:00.954: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.954: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:28:00.954: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 01:28:00.954: INFO: Container collectd ready: true, restart count 0 Oct 23 01:28:00.954: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 
01:28:00.954: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:28:00.954: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 01:28:00.954: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:28:00.954: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:28:00.954: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 01:28:00.954: INFO: Container config-reloader ready: true, restart count 0 Oct 23 01:28:00.954: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 01:28:00.954: INFO: Container grafana ready: true, restart count 0 Oct 23 01:28:00.954: INFO: Container prometheus ready: true, restart count 1 Oct 23 01:28:00.954: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 01:28:00.954: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:28:00.954: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 01:28:00.954: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 01:28:00.963: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 01:28:00.963: INFO: Container discover ready: false, restart count 0 Oct 23 01:28:00.963: INFO: Container init ready: false, restart count 0 Oct 23 01:28:00.963: INFO: Container install ready: false, restart count 0 Oct 23 01:28:00.963: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 01:28:00.963: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:28:00.963: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:28:00.963: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.963: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:28:00.963: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.963: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 01:28:00.963: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.963: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:28:00.963: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.963: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:28:00.963: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.963: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:28:00.963: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.963: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:28:00.963: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.963: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:28:00.963: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 
container statuses recorded) Oct 23 01:28:00.963: INFO: Container collectd ready: true, restart count 0 Oct 23 01:28:00.963: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:28:00.963: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:28:00.963: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 01:28:00.963: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:28:00.963: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:28:00.963: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 01:28:00.963: INFO: Container tas-extender ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 Oct 23 01:28:01.017: INFO: Pod cmk-kn29k requesting resource cpu=0m on Node node2 Oct 23 01:28:01.017: INFO: Pod cmk-t9r2t requesting resource cpu=0m on Node node1 Oct 23 01:28:01.017: INFO: Pod cmk-webhook-6c9d5f8578-pkwhc requesting resource cpu=0m on Node node2 Oct 23 01:28:01.017: INFO: Pod kube-flannel-2cdvd requesting resource cpu=150m on Node node1 Oct 23 01:28:01.017: INFO: Pod kube-flannel-xx6ls requesting resource cpu=150m on Node node2 Oct 23 01:28:01.017: INFO: Pod kube-multus-ds-amd64-fww5b requesting resource cpu=100m on Node node2 Oct 23 01:28:01.017: INFO: Pod kube-multus-ds-amd64-l97s4 requesting resource cpu=100m on Node node1 Oct 23 01:28:01.017: INFO: Pod kube-proxy-5h2bl requesting resource cpu=0m on Node node2 Oct 23 01:28:01.017: INFO: Pod kube-proxy-m9z8s requesting resource cpu=0m on Node node1 Oct 23 01:28:01.017: INFO: Pod kubernetes-dashboard-785dcbb76d-kc4kh requesting resource cpu=50m on Node node1 Oct 23 01:28:01.017: INFO: Pod kubernetes-metrics-scraper-5558854cb-dfn2n requesting resource cpu=0m on Node node1 Oct 23 01:28:01.017: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Oct 23 01:28:01.017: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Oct 23 01:28:01.017: INFO: Pod node-feature-discovery-worker-2pvq5 requesting resource cpu=0m on Node node1 Oct 23 01:28:01.017: INFO: Pod node-feature-discovery-worker-8k8m5 requesting resource cpu=0m on Node node2 Oct 23 01:28:01.017: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd requesting resource cpu=0m on Node node1 Oct 23 01:28:01.017: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq requesting resource cpu=0m on Node node2 Oct 23 01:28:01.017: INFO: Pod collectd-n9sbv requesting resource cpu=0m on Node node1 Oct 23 01:28:01.017: INFO: Pod collectd-xhdgw requesting resource cpu=0m on Node node2 Oct 23 01:28:01.017: INFO: Pod node-exporter-fjc79 requesting resource cpu=112m on Node node2 Oct 23 01:28:01.017: INFO: Pod node-exporter-v656r requesting resource cpu=112m on Node node1 Oct 23 01:28:01.017: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Oct 23 01:28:01.017: INFO: Pod prometheus-operator-585ccfb458-hwjk2 requesting resource cpu=100m on Node node1 Oct 23 01:28:01.017: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-gltgg requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. 
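[Editor's note, not part of the captured log] The request tally above is what sizes the filler pods created next. Summing node1's logged requests: 150m + 100m + 50m + 25m + 112m + 200m + 100m = 737m, and the suite then asks for 53384m more, which is consistent with a filler request of (allocatable CPU) - (sum of existing requests) against roughly 54121m allocatable; node2 works out the same way (387m requested, filler 53629m, about 54016m allocatable). The allocatable figures are inferred, since the log never prints them. A sketch of that arithmetic:

    // Sketch only: the sizing arithmetic implied by the log lines above;
    // allocatable values are inferred, not printed by the suite.
    package sketch

    // fillerMilliCPU returns the CPU request that exactly saturates a node.
    func fillerMilliCPU(allocatableMilli int64, requestsMilli []int64) int64 {
        var used int64
        for _, r := range requestsMilli {
            used += r
        }
        return allocatableMilli - used
    }

    // node1: fillerMilliCPU(54121, []int64{150, 100, 50, 25, 112, 200, 100}) == 53384
    // node2: fillerMilliCPU(54016, []int64{150, 100, 25, 112})               == 53629

With both nodes saturated, the follow-up "additional" pod can only fail with the "2 Insufficient cpu" event recorded below.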
Oct 23 01:28:01.017: INFO: Creating a pod which consumes cpu=53384m on Node node1 Oct 23 01:28:01.029: INFO: Creating a pod which consumes cpu=53629m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1ddf85b6-f94c-4c0e-b67b-7f4dd4819536.16b0849e81105b30], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2091/filler-pod-1ddf85b6-f94c-4c0e-b67b-7f4dd4819536 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-1ddf85b6-f94c-4c0e-b67b-7f4dd4819536.16b0849ed919d6b2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-1ddf85b6-f94c-4c0e-b67b-7f4dd4819536.16b0849ef4844184], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 459.952504ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-1ddf85b6-f94c-4c0e-b67b-7f4dd4819536.16b0849efb9f0929], Reason = [Created], Message = [Created container filler-pod-1ddf85b6-f94c-4c0e-b67b-7f4dd4819536] STEP: Considering event: Type = [Normal], Name = [filler-pod-1ddf85b6-f94c-4c0e-b67b-7f4dd4819536.16b0849f043ff922], Reason = [Started], Message = [Started container filler-pod-1ddf85b6-f94c-4c0e-b67b-7f4dd4819536] STEP: Considering event: Type = [Normal], Name = [filler-pod-bfa38002-dc01-416f-b8ab-84acda820eac.16b0849e808c05b9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2091/filler-pod-bfa38002-dc01-416f-b8ab-84acda820eac to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-bfa38002-dc01-416f-b8ab-84acda820eac.16b0849ed8f3a0c9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-bfa38002-dc01-416f-b8ab-84acda820eac.16b0849eed525953], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 341.743725ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-bfa38002-dc01-416f-b8ab-84acda820eac.16b0849ef47a2d94], Reason = [Created], Message = [Created container filler-pod-bfa38002-dc01-416f-b8ab-84acda820eac] STEP: Considering event: Type = [Normal], Name = [filler-pod-bfa38002-dc01-416f-b8ab-84acda820eac.16b0849efb877b49], Reason = [Started], Message = [Started container filler-pod-bfa38002-dc01-416f-b8ab-84acda820eac] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b0849f7101bbaf], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:28:06.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2091" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.230 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":17,"skipped":5456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Oct 23 01:28:06.108: INFO: Running AfterSuite actions on all nodes Oct 23 01:28:06.108: INFO: Running AfterSuite actions on node 1 Oct 23 01:28:06.108: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":17,"skipped":5753,"failed":0}
Ran 17 of 5770 Specs in 917.077 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5753 Skipped
PASS

Ginkgo ran 1 suite in 15m18.459897386s
Test Suite Passed
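[Editor's note, not part of the captured log] One closing annotation: the two "Value: Forbidden: may not be changed in an update" errors logged by the PriorityClass endpoints spec earlier in this run are the expected outcome, since a PriorityClass's .value is immutable after creation and the spec deliberately tries to update it. A sketch that provokes the same error; setup mirrors the first sketch, and the class name p1 matches the log:

    // Sketch only, not the suite's code: updating a PriorityClass value must fail.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pc, err := cs.SchedulingV1().PriorityClasses().Get(context.TODO(), "p1", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        pc.Value++ // .value is immutable after creation
        _, err = cs.SchedulingV1().PriorityClasses().Update(context.TODO(), pc, metav1.UpdateOptions{})
        // Expected: PriorityClass.scheduling.k8s.io "p1" is invalid:
        // Value: Forbidden: may not be changed in an update.
        fmt.Println(err)
    }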