I0422 22:11:15.384053      23 e2e.go:129] Starting e2e run "52bb2b99-3825-44c4-9378-0a598f337ec8" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1650665474 - Will randomize all specs
Will run 17 of 5773 specs

Apr 22 22:11:15.443: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 22:11:15.448: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 22 22:11:15.476: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 22:11:15.543: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting
Apr 22 22:11:15.543: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting
Apr 22 22:11:15.543: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 22:11:15.543: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 22 22:11:15.543: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 22 22:11:15.559: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 22 22:11:15.559: INFO: e2e test version: v1.21.9
Apr 22 22:11:15.561: INFO: kube-apiserver version: v1.21.1
Apr 22 22:11:15.561: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 22:11:15.566: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:11:15.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
W0422 22:11:15.589122      23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 22:11:15.589: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 22:11:15.592: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Apr 22 22:11:15.608: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 22 22:12:15.668: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:12:15.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Apr 22 22:12:19.732: INFO: found a healthy node: node2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 22:12:35.839: INFO: pods created so far: [1 1 1]
Apr 22 22:12:35.839: INFO: length of pods created so far: 3
Apr 22 22:12:53.856: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:13:00.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-7822" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:13:00.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4676" for this suite.
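Note: the preemption spec above hinges on PriorityClasses and pods that reference them; when resources are scarce, the scheduler evicts lower-priority pods to place higher-priority ones. A minimal client-go sketch of the two ingredients follows. It is not the e2e framework's actual helper code; the names, priority values, and pause image tag are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPriorityClasses registers a low and a high priority level; the
// scheduler may preempt pods of the former to schedule pods of the latter.
func createPriorityClasses(ctx context.Context, cs kubernetes.Interface) error {
	for name, value := range map[string]int32{"low-priority": 100, "high-priority": 1000} {
		pc := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}
		if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}

// preemptorPod returns a pod that, on a full node, can displace pods
// running under "low-priority". Image tag is an assumption.
func preemptorPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor", Namespace: ns},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
}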
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:105.364 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":1,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:13:00.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:13:00.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5097" for this suite.
STEP: Destroying namespace "nspatchtest-e80d6234-714d-4890-ad48-d1d90acfa2db-8004" for this suite.
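Note: the "patching the Namespace" step above boils down to a single Patch call. A self-contained client-go sketch, assuming the kubeconfig path from this run; the namespace name and label key/value here are illustrative, not necessarily the conformance test's exact ones.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch that adds one label, then read it back.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(),
		"nspatchtest-example", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels after patch:", ns.Labels)
}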
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":2,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:13:01.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 22 22:13:01.040: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 22 22:13:01.047: INFO: Number of nodes with available pods: 0 Apr 22 22:13:01.047: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 22 22:13:01.064: INFO: Number of nodes with available pods: 0 Apr 22 22:13:01.064: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:02.067: INFO: Number of nodes with available pods: 0 Apr 22 22:13:02.067: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:03.068: INFO: Number of nodes with available pods: 0 Apr 22 22:13:03.068: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:04.067: INFO: Number of nodes with available pods: 1 Apr 22 22:13:04.067: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 22 22:13:04.081: INFO: Number of nodes with available pods: 1 Apr 22 22:13:04.081: INFO: Number of running nodes: 0, number of available pods: 1 Apr 22 22:13:05.089: INFO: Number of nodes with available pods: 0 Apr 22 22:13:05.089: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 22 22:13:05.100: INFO: Number of nodes with available pods: 0 Apr 22 22:13:05.100: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:06.104: INFO: Number of nodes with available pods: 0 Apr 22 22:13:06.104: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:07.105: INFO: Number of nodes with available pods: 0 Apr 22 22:13:07.105: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:08.104: INFO: Number of nodes with available pods: 0 Apr 22 22:13:08.104: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:09.103: INFO: Number of nodes with available pods: 0 Apr 22 22:13:09.103: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:10.106: INFO: Number of nodes with available pods: 0 Apr 22 22:13:10.106: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:11.104: INFO: Number of nodes 
with available pods: 0 Apr 22 22:13:11.104: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:12.104: INFO: Number of nodes with available pods: 0 Apr 22 22:13:12.104: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:13.107: INFO: Number of nodes with available pods: 0 Apr 22 22:13:13.107: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:14.104: INFO: Number of nodes with available pods: 0 Apr 22 22:13:14.104: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:15.107: INFO: Number of nodes with available pods: 0 Apr 22 22:13:15.107: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:16.104: INFO: Number of nodes with available pods: 0 Apr 22 22:13:16.104: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:17.106: INFO: Number of nodes with available pods: 0 Apr 22 22:13:17.106: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:18.106: INFO: Number of nodes with available pods: 0 Apr 22 22:13:18.106: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:19.104: INFO: Number of nodes with available pods: 0 Apr 22 22:13:19.104: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:20.104: INFO: Number of nodes with available pods: 0 Apr 22 22:13:20.104: INFO: Node node2 is running more than one daemon pod Apr 22 22:13:21.105: INFO: Number of nodes with available pods: 1 Apr 22 22:13:21.105: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3583, will wait for the garbage collector to delete the pods Apr 22 22:13:21.168: INFO: Deleting DaemonSet.extensions daemon-set took: 4.975636ms Apr 22 22:13:21.269: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.695602ms Apr 22 22:13:24.473: INFO: Number of nodes with available pods: 0 Apr 22 22:13:24.473: INFO: Number of running nodes: 0, number of available pods: 0 Apr 22 22:13:24.480: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"51844"},"items":null} Apr 22 22:13:24.483: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"51844"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:13:24.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3583" for this suite. 
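Note: the "complex daemon" exercised above is, in essence, a DaemonSet whose pod template carries a node selector, so pods appear and disappear as node labels change. A rough sketch of such an object, assuming illustrative label keys ("color" mirrors the blue/green steps in the log); the httpd image tag is the one this suite uses elsewhere.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func complexDaemonSet(ns string) *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods land only on nodes labeled color=blue; relabeling a
					// node to green unschedules them, as the log shows.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
					}},
				},
			},
			// The spec later flips the selector to green and the strategy to
			// RollingUpdate, per the STEP lines above.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.OnDeleteDaemonSetType,
			},
		},
	}
}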
• [SLOW TEST:23.502 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":3,"skipped":241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:13:24.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 22:13:24.546: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 22:13:24.555: INFO: Waiting for terminating namespaces to be deleted...
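Note: the per-node pod dumps that follow ("Logging pods the apiserver thinks is on node node1 before test") amount to a pod list filtered by a field selector on the node name. A minimal sketch of that query, assuming a clientset built from the run's kubeconfig; the function name is made up.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsOnNode lists pods across all namespaces bound to one node,
// skipping pods that have already terminated.
func podsOnNode(ctx context.Context, cs kubernetes.Interface, node string) error {
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node + ",status.phase!=Succeeded,status.phase!=Failed",
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s from %s started at %v\n", p.Name, p.Namespace, p.Status.StartTime)
	}
	return nil
}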
Apr 22 22:13:24.557: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 22:13:24.566: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 22:13:24.566: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:13:24.567: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:13:24.567: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container discover ready: false, restart count 0
Apr 22 22:13:24.567: INFO: Container init ready: false, restart count 0
Apr 22 22:13:24.567: INFO: Container install ready: false, restart count 0
Apr 22 22:13:24.567: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 22:13:24.567: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:13:24.567: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:13:24.567: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 22:13:24.567: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 22:13:24.567: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:13:24.567: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:13:24.567: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container collectd ready: true, restart count 0
Apr 22 22:13:24.567: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:13:24.567: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:13:24.567: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:13:24.567: INFO: Container node-exporter ready: true, restart count 0
Apr 22 22:13:24.567: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container config-reloader ready: true, restart count 0
Apr 22 22:13:24.567: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 22:13:24.567: INFO: Container grafana ready: true, restart count 0
Apr 22 22:13:24.567: INFO: Container prometheus ready: true, restart count 1
Apr 22 22:13:24.567: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.567: INFO: Container tas-extender ready: true, restart count 0
Apr 22 22:13:24.567: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 22:13:24.576: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container discover ready: false, restart count 0
Apr 22 22:13:24.576: INFO: Container init ready: false, restart count 0
Apr 22 22:13:24.576: INFO: Container install ready: false, restart count 0
Apr 22 22:13:24.576: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:13:24.576: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:13:24.576: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 22:13:24.576: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 22:13:24.576: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:13:24.576: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:13:24.576: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 22:13:24.576: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 22:13:24.576: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:13:24.576: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:13:24.576: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container collectd ready: true, restart count 0
Apr 22 22:13:24.576: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:13:24.576: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:13:24.576: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 22:13:24.576: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:13:24.576: INFO: Container node-exporter ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-00080cda-0118-4216-b74d-6ed253a25689 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-00080cda-0118-4216-b74d-6ed253a25689 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-00080cda-0118-4216-b74d-6ed253a25689
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:18:34.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8537" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:310.171 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":4,"skipped":653,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:18:34.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 22:18:34.714: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 22:18:34.722: INFO: Waiting for terminating namespaces to be deleted...
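Note: the hostPort conflict validated in the previous spec comes from the scheduler treating hostIP 0.0.0.0 as claiming the port on every node address, so pod4 (0.0.0.0:54322) and pod5 (10.10.190.208:54322) cannot coexist on one node. A sketch of the port declarations involved, assuming the agnhost image the log names (tag is an assumption) and the kubernetes.io/hostname selector in place of the test's random label.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod claiming TCP hostPort 54322 on the given hostIP,
// steered to one node so two such pods contend for the same port.
func hostPortPod(name, hostIP, node string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Both pods target the same node; the second stays Pending.
			NodeSelector: map[string]string{"kubernetes.io/hostname": node},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP, // "0.0.0.0" for pod4, "10.10.190.208" for pod5
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}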
Apr 22 22:18:34.724: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 22:18:34.732: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 22:18:34.732: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:18:34.732: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:18:34.732: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container discover ready: false, restart count 0
Apr 22 22:18:34.733: INFO: Container init ready: false, restart count 0
Apr 22 22:18:34.733: INFO: Container install ready: false, restart count 0
Apr 22 22:18:34.733: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 22:18:34.733: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:18:34.733: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:18:34.733: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 22:18:34.733: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 22:18:34.733: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:18:34.733: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:18:34.733: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container collectd ready: true, restart count 0
Apr 22 22:18:34.733: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:18:34.733: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:18:34.733: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:18:34.733: INFO: Container node-exporter ready: true, restart count 0
Apr 22 22:18:34.733: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container config-reloader ready: true, restart count 0
Apr 22 22:18:34.733: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 22:18:34.733: INFO: Container grafana ready: true, restart count 0
Apr 22 22:18:34.733: INFO: Container prometheus ready: true, restart count 1
Apr 22 22:18:34.733: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.733: INFO: Container tas-extender ready: true, restart count 0
Apr 22 22:18:34.733: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 22:18:34.743: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container discover ready: false, restart count 0
Apr 22 22:18:34.743: INFO: Container init ready: false, restart count 0
Apr 22 22:18:34.743: INFO: Container install ready: false, restart count 0
Apr 22 22:18:34.743: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:18:34.743: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:18:34.743: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 22:18:34.743: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 22:18:34.743: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:18:34.743: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:18:34.743: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 22:18:34.743: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 22:18:34.743: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:18:34.743: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:18:34.743: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container collectd ready: true, restart count 0
Apr 22 22:18:34.743: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:18:34.743: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:18:34.743: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:18:34.743: INFO: Container node-exporter ready: true, restart count 0
Apr 22 22:18:34.743: INFO: pod4 from sched-pred-8537 started at 2022-04-22 22:13:29 +0000 UTC (1 container statuses recorded)
Apr 22 22:18:34.743: INFO: Container agnhost ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-a9004902-7daf-4a96-b4af-c2a51fc7cd4d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-a9004902-7daf-4a96-b4af-c2a51fc7cd4d off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a9004902-7daf-4a96-b4af-c2a51fc7cd4d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:18:42.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1623" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.142 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":5,"skipped":922,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:18:42.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 22 22:18:43.141: INFO: Pod name wrapped-volume-race-32e775af-dfcc-4372-9225-8c854ed33289: Found 2 pods out of 5
Apr 22 22:18:48.154: INFO: Pod name wrapped-volume-race-32e775af-dfcc-4372-9225-8c854ed33289: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-32e775af-dfcc-4372-9225-8c854ed33289 in namespace emptydir-wrapper-7976, will wait for the garbage collector to delete the pods
Apr 22 22:19:02.235: INFO: Deleting ReplicationController wrapped-volume-race-32e775af-dfcc-4372-9225-8c854ed33289 took: 5.247981ms
Apr 22 22:19:02.336: INFO: Terminating ReplicationController wrapped-volume-race-32e775af-dfcc-4372-9225-8c854ed33289 pods took: 100.7607ms
STEP: Creating RC which spawns configmap-volume pods
Apr 22 22:19:18.054: INFO: Pod name wrapped-volume-race-8c7abf39-6816-4379-8fdd-73c0b047f914: Found 0 pods out of 5
Apr 22 22:19:23.068: INFO: Pod name wrapped-volume-race-8c7abf39-6816-4379-8fdd-73c0b047f914: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8c7abf39-6816-4379-8fdd-73c0b047f914 in namespace emptydir-wrapper-7976, will wait for the garbage collector to delete the pods
Apr 22 22:19:49.150: INFO: Deleting ReplicationController wrapped-volume-race-8c7abf39-6816-4379-8fdd-73c0b047f914 took: 5.661717ms
Apr 22 22:19:49.251: INFO: Terminating ReplicationController wrapped-volume-race-8c7abf39-6816-4379-8fdd-73c0b047f914 pods took: 100.187778ms
STEP: Creating RC which spawns configmap-volume pods
Apr 22 22:19:58.071: INFO: Pod name wrapped-volume-race-d4b72195-288b-40c8-96f1-11dc639a4b7c: Found 0 pods out of 5
Apr 22 22:20:03.083: INFO: Pod name wrapped-volume-race-d4b72195-288b-40c8-96f1-11dc639a4b7c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d4b72195-288b-40c8-96f1-11dc639a4b7c in namespace emptydir-wrapper-7976, will wait for the garbage collector to delete the pods
Apr 22 22:20:17.168: INFO: Deleting ReplicationController wrapped-volume-race-d4b72195-288b-40c8-96f1-11dc639a4b7c took: 5.311158ms
Apr 22 22:20:17.269: INFO: Terminating ReplicationController wrapped-volume-race-d4b72195-288b-40c8-96f1-11dc639a4b7c pods took: 101.116815ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:20:28.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7976" for this suite.
• [SLOW TEST:105.319 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":6,"skipped":969,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:20:28.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
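Note: the "simple DaemonSet" above has no node selector, so the controller should place one pod on every schedulable (untainted) node; the wait loop that follows polls the DaemonSet status until available pods match the desired count. A minimal sketch under those assumptions (function name and timeouts are illustrative):

package main

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func runSimpleDaemon(ctx context.Context, cs kubernetes.Interface, ns string) error {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Poll until every desired node reports an available daemon pod.
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return cur.Status.DesiredNumberScheduled > 0 &&
			cur.Status.NumberAvailable == cur.Status.DesiredNumberScheduled, nil
	})
}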
Apr 22 22:20:28.205: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:28.205: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:28.205: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:28.208: INFO: Number of nodes with available pods: 0
Apr 22 22:20:28.208: INFO: Node node1 is running more than one daemon pod
Apr 22 22:20:29.214: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:29.214: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:29.214: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:29.216: INFO: Number of nodes with available pods: 0
Apr 22 22:20:29.216: INFO: Node node1 is running more than one daemon pod
Apr 22 22:20:30.218: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:30.218: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:30.218: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:30.221: INFO: Number of nodes with available pods: 0
Apr 22 22:20:30.221: INFO: Node node1 is running more than one daemon pod
Apr 22 22:20:31.215: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:31.215: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:31.215: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:31.218: INFO: Number of nodes with available pods: 0
Apr 22 22:20:31.218: INFO: Node node1 is running more than one daemon pod
Apr 22 22:20:32.214: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:32.215: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:32.215: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:32.218: INFO: Number of nodes with available pods: 0
Apr 22 22:20:32.218: INFO: Node node1 is running more than one daemon pod
Apr 22 22:20:33.213: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:33.213: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:33.214: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:33.216: INFO: Number of nodes with available pods: 2
Apr 22 22:20:33.216: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 22 22:20:33.233: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:33.233: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:33.233: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:33.235: INFO: Number of nodes with available pods: 1
Apr 22 22:20:33.235: INFO: Node node2 is running more than one daemon pod
Apr 22 22:20:34.240: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:34.240: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:34.240: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:34.243: INFO: Number of nodes with available pods: 1
Apr 22 22:20:34.243: INFO: Node node2 is running more than one daemon pod
Apr 22 22:20:35.241: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:35.241: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:35.241: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:35.243: INFO: Number of nodes with available pods: 1
Apr 22 22:20:35.243: INFO: Node node2 is running more than one daemon pod
Apr 22 22:20:36.241: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:36.241: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:36.241: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:36.244: INFO: Number of nodes with available pods: 1
Apr 22 22:20:36.244: INFO: Node node2 is running more than one daemon pod
Apr 22 22:20:37.241: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:37.241: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:37.241: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:37.244: INFO: Number of nodes with available pods: 1
Apr 22 22:20:37.244: INFO: Node node2 is running more than one daemon pod
Apr 22 22:20:38.243: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:38.243: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:38.243: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:38.245: INFO: Number of nodes with available pods: 1
Apr 22 22:20:38.245: INFO: Node node2 is running more than one daemon pod
Apr 22 22:20:39.241: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:39.241: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:39.241: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:39.244: INFO: Number of nodes with available pods: 2
Apr 22 22:20:39.244: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4585, will wait for the garbage collector to delete the pods
Apr 22 22:20:39.305: INFO: Deleting DaemonSet.extensions daemon-set took: 5.478875ms
Apr 22 22:20:39.407: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.056039ms
Apr 22 22:20:47.910: INFO: Number of nodes with available pods: 0
Apr 22 22:20:47.910: INFO: Number of running nodes: 0, number of available pods: 0
Apr 22 22:20:47.912: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"54014"},"items":null}
Apr 22 22:20:47.914: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"54014"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:20:47.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4585" for this suite.
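Note: the recurring "will wait for the garbage collector to delete the pods" step corresponds to a delete with a propagation policy, so the daemon pods are removed by the GC rather than orphaned. A small sketch of that call (function name is made up):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDaemonSetWithGC deletes a DaemonSet and lets the garbage collector
// remove its pods asynchronously; Foreground would instead block deletion
// of the owner until the dependents are gone.
func deleteDaemonSetWithGC(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().DaemonSets(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}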
• [SLOW TEST:19.777 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":7,"skipped":998,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:20:47.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 22:20:47.972: INFO: Create a RollingUpdate DaemonSet
Apr 22 22:20:47.976: INFO: Check that daemon pods launch on every node of the cluster
Apr 22 22:20:47.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:47.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:47.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:47.983: INFO: Number of nodes with available pods: 0
Apr 22 22:20:47.983: INFO: Node node1 is running more than one daemon pod
Apr 22 22:20:48.987: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:48.987: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:48.987: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:48.991: INFO: Number of nodes with available pods: 0
Apr 22 22:20:48.991: INFO: Node node1 is running more than one daemon pod
Apr 22 22:20:49.989: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:49.989: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:49.989: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:49.992: INFO: Number of nodes with available pods: 0
Apr 22 22:20:49.992: INFO: Node node1 is running more than one daemon pod
Apr 22 22:20:50.989: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:50.989: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:50.989: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:20:50.991: INFO: Number of nodes with available pods: 2
Apr 22 22:20:50.991: INFO: Number of running nodes: 2, number of available pods: 2
Apr 22 22:20:50.991: INFO: Update the DaemonSet to trigger a rollout
Apr 22 22:20:50.997: INFO: Updating DaemonSet daemon-set
Apr 22 22:21:08.013: INFO: Roll back the DaemonSet before rollout is complete
Apr 22 22:21:08.019: INFO: Updating DaemonSet daemon-set
Apr 22 22:21:08.019: INFO: Make sure DaemonSet rollback is complete
Apr 22 22:21:08.022: INFO: Wrong image for pod: daemon-set-8qdb9. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
Apr 22 22:21:08.022: INFO: Pod daemon-set-8qdb9 is not available
Apr 22 22:21:08.026: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:08.026: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:08.027: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:09.038: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:09.038: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:09.038: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:10.037: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:10.037: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:10.037: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:11.034: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:11.034: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:11.034: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:12.039: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:12.039: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:12.039: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:13.037: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:13.037: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:13.037: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:14.035: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:14.035: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:14.035: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:15.035: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:15.035: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:15.035: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:16.035: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:16.035: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:16.035: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:17.034: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:17.034: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:17.034: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:18.033: INFO: Pod daemon-set-mfqn8 is not available
Apr 22 22:21:18.037: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:18.037: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:21:18.038: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2103, will wait for the garbage collector to delete the pods
Apr 22 22:21:18.101: INFO: Deleting DaemonSet.extensions daemon-set took: 5.462686ms
Apr 22 22:21:18.202: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.875673ms
Apr 22 22:21:27.906: INFO: Number of nodes with available pods: 0
Apr 22 22:21:27.906: INFO: Number of running nodes: 0, number of available pods: 0
Apr 22 22:21:27.909: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"54215"},"items":null}
Apr 22 22:21:27.911: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"54215"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:21:27.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2103" for this suite.
• [SLOW TEST:39.999 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":8,"skipped":1065,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:21:27.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 22 22:21:27.974: INFO: Waiting up to 1m0s for all nodes to be ready Apr 22 22:22:28.039: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Apr 22 22:22:28.067: INFO: Created pod: pod0-sched-preemption-low-priority Apr 22 22:22:28.087: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:22:50.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8895" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:82.228 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":9,"skipped":1799,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 22:22:50.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 22:22:56.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8036" for this suite. STEP: Destroying namespace "nsdeletetest-3633" for this suite. Apr 22 22:22:56.297: INFO: Namespace nsdeletetest-3633 was already deleted STEP: Destroying namespace "nsdeletetest-8236" for this suite. 
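The spec above can be reproduced outside the e2e framework. A minimal client-go sketch of the services-removed check, assuming client-go v0.21.x to match the kube-apiserver v1.21.1 in this run; the namespace and service names are illustrative, not the generated ones from the log, and error handling is abbreviated. Later sketches reuse cs and ctx from here.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        cs, _ := kubernetes.NewForConfig(cfg)
        ctx := context.TODO()

        // Create a throwaway namespace with one service in it.
        ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest"}}
        cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
            Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
        }
        cs.CoreV1().Services("nsdeletetest").Create(ctx, svc, metav1.CreateOptions{})

        // Delete the namespace, wait until it is gone (poll elided),
        // recreate it, and verify no service survived the round trip.
        cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest", metav1.DeleteOptions{})
        // ... wait for Get to return NotFound, then Create the namespace again ...
        svcs, _ := cs.CoreV1().Services("nsdeletetest").List(ctx, metav1.ListOptions{})
        fmt.Printf("services after recreate: %d (want 0)\n", len(svcs.Items))
    }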
• [SLOW TEST:6.131 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":10,"skipped":1856,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:22:56.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:23:11.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3292" for this suite.
STEP: Destroying namespace "nsdeletetest-8522" for this suite.
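The step worth pausing on is "Waiting for the namespace to be removed.": namespace deletion is asynchronous, and the object sits in a Terminating phase until the garbage collector has cleared its contents. A sketch of that wait, using the polling helpers from k8s.io/apimachinery; waitForNamespaceGone is a hypothetical helper, not a framework function.

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNamespaceGone polls until Get returns NotFound, meaning the
    // Terminating phase has finished and the name can safely be reused.
    func waitForNamespaceGone(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // fully deleted
            }
            return false, err // err == nil: still Terminating, keep polling
        })
    }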
Apr 22 22:23:11.420: INFO: Namespace nsdeletetest-8522 was already deleted
STEP: Destroying namespace "nsdeletetest-6153" for this suite.

• [SLOW TEST:15.105 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":11,"skipped":3081,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:23:11.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Apr 22 22:23:11.466: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 22 22:24:11.519: INFO: Waiting for terminating namespaces to be deleted...
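The two "Forbidden" INFO lines in the block that follows are the point of the spec, not a failure: value is immutable on a scheduling.k8s.io/v1 PriorityClass, so an update that touches it must be rejected, while the other verbs on the endpoint succeed. A sketch of the rejected call, reusing cs and ctx from the earlier sketch; the names p1/p2 mirror the log.

    // Attempting to change an existing PriorityClass's Value is rejected
    // by API validation; only fields like the description may change.
    pc, _ := cs.SchedulingV1().PriorityClasses().Get(ctx, "p1", metav1.GetOptions{})
    pc.Value += 100 // illegal mutation
    _, err := cs.SchedulingV1().PriorityClasses().Update(ctx, pc, metav1.UpdateOptions{})
    // err: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden:
    // may not be changed in an update.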
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:24:11.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 22:24:11.553: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Apr 22 22:24:11.556: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:24:11.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-6781" for this suite.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:24:11.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6962" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:60.192 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":12,"skipped":4422,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:24:11.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 22:24:11.651: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 22:24:11.659: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 22:24:11.661: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 22:24:11.672: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 22:24:11.672: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:24:11.673: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:24:11.673: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container discover ready: false, restart count 0
Apr 22 22:24:11.673: INFO: Container init ready: false, restart count 0
Apr 22 22:24:11.673: INFO: Container install ready: false, restart count 0
Apr 22 22:24:11.673: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 22:24:11.673: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:24:11.673: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:24:11.673: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 22:24:11.673: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 22:24:11.673: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:24:11.673: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:24:11.673: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container collectd ready: true, restart count 0
Apr 22 22:24:11.673: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:24:11.673: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:24:11.673: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:24:11.673: INFO: Container node-exporter ready: true, restart count 0
Apr 22 22:24:11.673: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container config-reloader ready: true, restart count 0
Apr 22 22:24:11.673: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 22:24:11.673: INFO: Container grafana ready: true, restart count 0
Apr 22 22:24:11.673: INFO: Container prometheus ready: true, restart count 1
Apr 22 22:24:11.673: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.673: INFO: Container tas-extender ready: true, restart count 0
Apr 22 22:24:11.673: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 22:24:11.682: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container discover ready: false, restart count 0
Apr 22 22:24:11.682: INFO: Container init ready: false, restart count 0
Apr 22 22:24:11.682: INFO: Container install ready: false, restart count 0
Apr 22 22:24:11.682: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:24:11.682: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:24:11.682: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 22:24:11.682: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 22:24:11.682: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:24:11.682: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:24:11.682: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 22:24:11.682: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 22:24:11.682: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:24:11.682: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:24:11.682: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container collectd ready: true, restart count 0
Apr 22 22:24:11.682: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:24:11.682: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:24:11.682: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 22:24:11.682: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:24:11.682: INFO: Container node-exporter ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: verifying the node has the label node node1
STEP: verifying the node has the label node node2
Apr 22 22:24:11.735: INFO: Pod cmk-2vd7z requesting resource cpu=0m on Node node1
Apr 22 22:24:11.735: INFO: Pod cmk-vdkxb requesting resource cpu=0m on Node node2
Apr 22 22:24:11.735: INFO: Pod cmk-webhook-6c9d5f8578-nmxns requesting resource cpu=0m on Node node2
Apr 22 22:24:11.735: INFO: Pod kube-flannel-2kskh requesting resource cpu=150m on Node node2
Apr 22 22:24:11.735: INFO: Pod kube-flannel-l4rjs requesting resource cpu=150m on Node node1
Apr 22 22:24:11.735: INFO: Pod kube-multus-ds-amd64-kjrqq requesting resource cpu=100m on Node node2
Apr 22 22:24:11.735: INFO: Pod kube-multus-ds-amd64-x8jqs requesting resource cpu=100m on Node node1
Apr 22 22:24:11.735: INFO: Pod kube-proxy-jvkvz requesting resource cpu=0m on Node node2
Apr 22 22:24:11.735: INFO: Pod kube-proxy-v8fdh requesting resource cpu=0m on Node node1
Apr 22 22:24:11.735: INFO: Pod kubernetes-dashboard-785dcbb76d-bxmz8 requesting resource cpu=50m on Node node2
Apr 22 22:24:11.735: INFO: Pod kubernetes-metrics-scraper-5558854cb-kdpvp requesting resource cpu=0m on Node node1
Apr 22 22:24:11.735: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1
Apr 22 22:24:11.735: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2
Apr 22 22:24:11.735: INFO: Pod node-feature-discovery-worker-2hkr5 requesting resource cpu=0m on Node node1
Apr 22 22:24:11.735: INFO: Pod node-feature-discovery-worker-bktph requesting resource cpu=0m on Node node2
Apr 22 22:24:11.735: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh requesting resource cpu=0m on Node node1
Apr 22 22:24:11.735: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd requesting resource cpu=0m on Node node2
Apr 22 22:24:11.735: INFO: Pod collectd-g2c8k requesting resource cpu=0m on Node node1
Apr 22 22:24:11.735: INFO: Pod collectd-ptpbz requesting resource cpu=0m on Node node2
Apr 22 22:24:11.735: INFO: Pod node-exporter-9zzfv requesting resource cpu=112m on Node node1
Apr 22 22:24:11.735: INFO: Pod node-exporter-c4bhs requesting resource cpu=112m on Node node2
Apr 22 22:24:11.735: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1
Apr 22 22:24:11.735: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-8ns7g requesting resource cpu=0m on Node node1
STEP: Starting Pods to consume most of the cluster CPU.
Apr 22 22:24:11.735: INFO: Creating a pod which consumes cpu=53489m on Node node1
Apr 22 22:24:11.746: INFO: Creating a pod which consumes cpu=53594m on Node node2
STEP: Creating another pod that requires unavailable amount of CPU.
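The filler-pod sizes are not arbitrary: the suite sums the CPU requests it just logged (587m on node1 in this run), subtracts them from the node's allocatable CPU, and requests the remainder, leaving no room for one more pod. A sketch of that sizing, reusing cs and ctx from the first sketch; the pod name is illustrative and resource is k8s.io/apimachinery/pkg/api/resource.

    node, _ := cs.CoreV1().Nodes().Get(ctx, "node1", metav1.GetOptions{})
    spare := node.Status.Allocatable[corev1.ResourceCPU]
    spare.Sub(resource.MustParse("587m")) // the logged node1 requests add up to 587m
    // spare now matches the 53489m the log shows the filler pod consuming.
    filler := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.4.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: spare},
                    Limits:   corev1.ResourceList{corev1.ResourceCPU: spare},
                },
            }},
        },
    }
    cs.CoreV1().Pods("sched-pred-3598").Create(ctx, filler, metav1.CreateOptions{})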
STEP: Considering event: Type = [Normal], Name = [filler-pod-3daab838-3e30-41f1-ab67-01908570837b.16e85835d9e793c6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3598/filler-pod-3daab838-3e30-41f1-ab67-01908570837b to node1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3daab838-3e30-41f1-ab67-01908570837b.16e8583632dc7c61], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3daab838-3e30-41f1-ab67-01908570837b.16e8583649ed9620], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 386.988957ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3daab838-3e30-41f1-ab67-01908570837b.16e8583651d624b6], Reason = [Created], Message = [Created container filler-pod-3daab838-3e30-41f1-ab67-01908570837b]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3daab838-3e30-41f1-ab67-01908570837b.16e8583658d6a623], Reason = [Started], Message = [Started container filler-pod-3daab838-3e30-41f1-ab67-01908570837b]
STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe9bfae-7032-4b2b-847c-eedb4ce7030f.16e85835da6dc69f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3598/filler-pod-ffe9bfae-7032-4b2b-847c-eedb4ce7030f to node2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe9bfae-7032-4b2b-847c-eedb4ce7030f.16e858363b2fa9f8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe9bfae-7032-4b2b-847c-eedb4ce7030f.16e85836516729e5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 372.728505ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe9bfae-7032-4b2b-847c-eedb4ce7030f.16e8583658d866f1], Reason = [Created], Message = [Created container filler-pod-ffe9bfae-7032-4b2b-847c-eedb4ce7030f]
STEP: Considering event: Type = [Normal], Name = [filler-pod-ffe9bfae-7032-4b2b-847c-eedb4ce7030f.16e8583660acf56b], Reason = [Started], Message = [Started container filler-pod-ffe9bfae-7032-4b2b-847c-eedb4ce7030f]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16e85836ca1ea7b2], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: removing the label node off the node node2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node node1
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:24:16.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3598" for this suite.
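That single Warning is what the spec asserts on: with both filler pods Running, one more pod cannot fit anywhere, since the two workers are full and the three masters are tainted. A sketch of fetching the event, reusing cs and ctx; reason and involvedObject.name are field selectors the core events API accepts.

    evts, _ := cs.CoreV1().Events("sched-pred-3598").List(ctx, metav1.ListOptions{
        FieldSelector: "reason=FailedScheduling,involvedObject.name=additional-pod",
    })
    for _, e := range evts.Items {
        fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
    }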
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:5.186 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":13,"skipped":4426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:24:16.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Apr 22 22:24:16.850: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 22 22:25:16.904: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Apr 22 22:25:16.930: INFO: Created pod: pod0-sched-preemption-low-priority
Apr 22 22:25:16.950: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:25:37.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9075" for this suite.
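Both preemption specs in this run follow one recipe: pods filling about 2/3 of each node at low and medium priority, then one pod whose priority outranks them, so the scheduler must evict a victim to place it. A sketch of the preemptor side, reusing cs and ctx; the class name and value are illustrative, and schedulingv1 is k8s.io/api/scheduling/v1.

    high := &schedulingv1.PriorityClass{
        ObjectMeta: metav1.ObjectMeta{Name: "high-priority"},
        Value:      1000, // higher value wins; lower-priority pods become victims
    }
    cs.SchedulingV1().PriorityClasses().Create(ctx, high, metav1.CreateOptions{})

    preemptor := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
        Spec: corev1.PodSpec{
            PriorityClassName: "high-priority",
            // Give it the same resource requests as a low-priority pod, so
            // it only fits if that pod is preempted.
            Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
        },
    }
    cs.CoreV1().Pods("sched-preemption").Create(ctx, preemptor, metav1.CreateOptions{})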
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:80.227 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":14,"skipped":4709,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:25:37.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 22 22:25:37.098: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
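A "simple daemon set" in the sense used here is roughly the following, with the RollingUpdate strategy this spec exercises. A sketch reusing cs and ctx; the suite's real manifest lives in test/e2e/apps/daemon_set.go, appsv1 is k8s.io/api/apps/v1, and the labels are illustrative. Note there is no toleration for the master taint, which is why the polling below keeps skipping master1 through master3.

    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"daemonset-name": "daemon-set"},
            },
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                // Replace pods node by node when the template changes.
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: map[string]string{"daemonset-name": "daemon-set"},
                },
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
                    }},
                },
            },
        },
    }
    cs.AppsV1().DaemonSets("daemonsets-2551").Create(ctx, ds, metav1.CreateOptions{})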
Apr 22 22:25:37.106: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:37.106: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:37.106: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:37.108: INFO: Number of nodes with available pods: 0
Apr 22 22:25:37.108: INFO: Node node1 is running more than one daemon pod
Apr 22 22:25:38.114: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:38.114: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:38.114: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:38.117: INFO: Number of nodes with available pods: 0
Apr 22 22:25:38.117: INFO: Node node1 is running more than one daemon pod
Apr 22 22:25:39.113: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:39.114: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:39.114: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:39.116: INFO: Number of nodes with available pods: 0
Apr 22 22:25:39.116: INFO: Node node1 is running more than one daemon pod
Apr 22 22:25:40.118: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:40.118: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:40.118: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:40.125: INFO: Number of nodes with available pods: 1
Apr 22 22:25:40.125: INFO: Node node2 is running more than one daemon pod
Apr 22 22:25:41.114: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:41.115: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:41.115: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:41.118: INFO: Number of nodes with available pods: 2
Apr 22 22:25:41.118: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
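The image bump that starts the rollout can be a one-line strategic merge patch. A sketch reusing cs and ctx; the container name "app" comes from the sketch above, the agnhost tag is the image the log shows the pods converging to, and types is k8s.io/apimachinery/pkg/types.

    patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)
    _, err := cs.AppsV1().DaemonSets("daemonsets-2551").Patch(ctx, "daemon-set",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{})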
Apr 22 22:25:41.139: INFO: Wrong image for pod: daemon-set-6w92d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Apr 22 22:25:41.139: INFO: Wrong image for pod: daemon-set-gkvfl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Apr 22 22:25:41.144: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:41.144: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:41.144: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:42.148: INFO: Wrong image for pod: daemon-set-gkvfl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Apr 22 22:25:42.152: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:42.152: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:42.152: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:43.150: INFO: Wrong image for pod: daemon-set-gkvfl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Apr 22 22:25:43.154: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:43.154: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:43.154: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:44.149: INFO: Pod daemon-set-5c8dc is not available
Apr 22 22:25:44.149: INFO: Wrong image for pod: daemon-set-gkvfl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Apr 22 22:25:44.154: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:44.154: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:44.154: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:45.151: INFO: Pod daemon-set-5c8dc is not available
Apr 22 22:25:45.151: INFO: Wrong image for pod: daemon-set-gkvfl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Apr 22 22:25:45.156: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:45.156: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:45.156: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:46.151: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:46.151: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:46.151: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:47.154: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:47.154: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:47.155: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:48.157: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:48.157: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:48.157: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:49.152: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:49.152: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:49.152: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:50.154: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:50.154: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:50.154: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:51.154: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:51.154: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:51.154: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:52.155: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:52.155: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:52.155: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:53.156: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:53.156: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:53.156: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:54.153: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:54.153: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:54.153: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:55.155: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:55.155: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:55.155: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:56.151: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:56.151: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:56.151: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:57.154: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:57.154: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:57.154: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:58.151: INFO: Pod daemon-set-6kx46 is not available
Apr 22 22:25:58.154: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:58.154: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:58.154: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 22 22:25:58.159: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:58.159: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:58.159: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:58.161: INFO: Number of nodes with available pods: 1
Apr 22 22:25:58.161: INFO: Node node2 is running more than one daemon pod
Apr 22 22:25:59.167: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:59.167: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:59.167: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:25:59.170: INFO: Number of nodes with available pods: 1
Apr 22 22:25:59.170: INFO: Node node2 is running more than one daemon pod
Apr 22 22:26:00.167: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:00.168: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:00.168: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:00.170: INFO: Number of nodes with available pods: 1
Apr 22 22:26:00.170: INFO: Node node2 is running more than one daemon pod
Apr 22 22:26:01.166: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:01.166: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:01.166: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:01.169: INFO: Number of nodes with available pods: 1
Apr 22 22:26:01.169: INFO: Node node2 is running more than one daemon pod
Apr 22 22:26:02.168: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:02.168: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:02.168: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:02.172: INFO: Number of nodes with available pods: 2
Apr 22 22:26:02.172: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2551, will wait for the garbage collector to delete the pods
Apr 22 22:26:02.242: INFO: Deleting DaemonSet.extensions daemon-set took: 5.301222ms
Apr 22 22:26:02.342: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.383165ms
Apr 22 22:26:07.946: INFO: Number of nodes with available pods: 0
Apr 22 22:26:07.946: INFO: Number of running nodes: 0, number of available pods: 0
Apr 22 22:26:07.949: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55414"},"items":null}
Apr 22 22:26:07.951: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55414"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:26:07.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2551" for this suite.
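All the polling above reduces to comparing DaemonSet status counters. A simplified stand-in for the framework's per-node checks, reusing cs and ctx:

    ds, _ := cs.AppsV1().DaemonSets("daemonsets-2551").Get(ctx, "daemon-set", metav1.GetOptions{})
    s := ds.Status
    // Rollout is done when every scheduled pod runs the new template and
    // is available.
    done := s.UpdatedNumberScheduled == s.DesiredNumberScheduled &&
        s.NumberAvailable == s.DesiredNumberScheduled
    fmt.Printf("updated %d/%d, available %d, rollout complete: %v\n",
        s.UpdatedNumberScheduled, s.DesiredNumberScheduled, s.NumberAvailable, done)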
• [SLOW TEST:30.912 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":15,"skipped":5221,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:26:07.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
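Editor's note: for reference, a client-go sketch of roughly what "Creating a simple DaemonSet" amounts to. The image, labels, and namespace here are assumptions for illustration (the namespace name is taken from the teardown logged later in this run); note the pod template carries no toleration for the master taint, which is why the three masters are skipped in the checks around this step.

```go
// Hedged sketch: create a minimal DaemonSet like the test's "daemon-set".
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.4.1", // placeholder image
					}},
					// No toleration for node-role.kubernetes.io/master:NoSchedule,
					// so pods land only on the two worker nodes.
				},
			},
		},
	}
	// "daemonsets-3670" is the namespace this run reports destroying below.
	if _, err := client.AppsV1().DaemonSets("daemonsets-3670").Create(
		context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```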
Apr 22 22:26:08.036: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:08.036: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:08.036: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:08.043: INFO: Number of nodes with available pods: 0
Apr 22 22:26:08.043: INFO: Node node1 is running more than one daemon pod
Apr 22 22:26:09.049: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:09.049: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:09.049: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:09.053: INFO: Number of nodes with available pods: 0
Apr 22 22:26:09.053: INFO: Node node1 is running more than one daemon pod
Apr 22 22:26:10.050: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:10.050: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:10.050: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:10.053: INFO: Number of nodes with available pods: 0
Apr 22 22:26:10.053: INFO: Node node1 is running more than one daemon pod
Apr 22 22:26:11.048: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:11.048: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:11.048: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:11.051: INFO: Number of nodes with available pods: 2
Apr 22 22:26:11.051: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
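Editor's note: the once-per-second cadence of the checks above suggests a poll loop. A hedged sketch using the apimachinery wait helper; the condition body here is a stand-in, not the framework's real "all schedulable nodes have an available pod" check.

```go
// Sketch of a ~1s poll loop like the one producing the log lines above.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// Poll every second, give up after two minutes (intervals assumed).
	err := wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		attempts++
		fmt.Printf("checking node/pod availability, attempt %d\n", attempts)
		// Stand-in condition: a real check would count nodes whose daemon
		// pod is available, as the "Number of nodes with available pods"
		// lines do.
		return attempts >= 3, nil
	})
	if err != nil {
		panic(err)
	}
}
```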
Apr 22 22:26:11.068: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:11.068: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:11.068: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:11.071: INFO: Number of nodes with available pods: 1
Apr 22 22:26:11.071: INFO: Node node2 is running more than one daemon pod
Apr 22 22:26:12.080: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:12.080: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:12.080: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:12.082: INFO: Number of nodes with available pods: 1
Apr 22 22:26:12.083: INFO: Node node2 is running more than one daemon pod
Apr 22 22:26:13.078: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:13.078: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:13.078: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:13.081: INFO: Number of nodes with available pods: 1
Apr 22 22:26:13.081: INFO: Node node2 is running more than one daemon pod
Apr 22 22:26:14.076: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:14.076: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:14.076: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 22 22:26:14.079: INFO: Number of nodes with available pods: 2
Apr 22 22:26:14.079: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
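Editor's note: the "Set a daemon pod's phase to 'Failed'" step above can be approximated by writing the pod's status subresource and letting the DaemonSet controller replace the pod. A sketch; the pod name and namespace below are assumptions for illustration, not values taken from this step's (unlogged) pod.

```go
// Hedged sketch: mark one daemon pod Failed via the status subresource.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods := client.CoreV1().Pods("daemonsets-3670")
	// Hypothetical pod name; a real caller would list pods by the
	// DaemonSet's label selector and pick one.
	pod, err := pods.Get(context.TODO(), "daemon-set-xxxxx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.Status.Phase = corev1.PodFailed
	// The controller should then delete the failed pod and create a fresh
	// one, which is what the "revived" polling above verifies.
	if _, err := pods.UpdateStatus(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```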
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3670, will wait for the garbage collector to delete the pods
Apr 22 22:26:14.143: INFO: Deleting DaemonSet.extensions daemon-set took: 5.340437ms
Apr 22 22:26:14.244: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.045685ms
Apr 22 22:26:27.948: INFO: Number of nodes with available pods: 0
Apr 22 22:26:27.948: INFO: Number of running nodes: 0, number of available pods: 0
Apr 22 22:26:27.951: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55562"},"items":null}
Apr 22 22:26:27.953: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55562"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:26:27.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3670" for this suite.
• [SLOW TEST:19.997 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":16,"skipped":5489,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 22:26:27.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 22:26:27.998: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 22:26:28.005: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 22:26:28.008: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 22:26:28.024: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:26:28.024: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:26:28.024: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container discover ready: false, restart count 0
Apr 22 22:26:28.024: INFO: Container init ready: false, restart count 0
Apr 22 22:26:28.024: INFO: Container install ready: false, restart count 0
Apr 22 22:26:28.024: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 22:26:28.024: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:26:28.024: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:26:28.024: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 22:26:28.024: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 22:26:28.024: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:26:28.024: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:26:28.024: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container collectd ready: true, restart count 0
Apr 22 22:26:28.024: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:26:28.024: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:26:28.024: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:26:28.024: INFO: Container node-exporter ready: true, restart count 0
Apr 22 22:26:28.024: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container config-reloader ready: true, restart count 0
Apr 22 22:26:28.024: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 22:26:28.024: INFO: Container grafana ready: true, restart count 0
Apr 22 22:26:28.024: INFO: Container prometheus ready: true, restart count 1
Apr 22 22:26:28.024: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.024: INFO: Container tas-extender ready: true, restart count 0
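Editor's note: the per-node dump above ("Logging pods the apiserver thinks is on node node1", sic, as logged) can be reproduced with a field selector on spec.nodeName. A sketch; the node name is taken from the log, the output format is approximate.

```go
// Sketch: list all pods scheduled to a given node, across namespaces.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Field selector on spec.nodeName limits the list to one node's pods.
	list, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node1"})
	if err != nil {
		panic(err)
	}
	for _, p := range list.Items {
		fmt.Printf("%s from %s started at %v (%d container statuses recorded)\n",
			p.Name, p.Namespace, p.Status.StartTime, len(p.Status.ContainerStatuses))
	}
}
```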
Apr 22 22:26:28.024: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 22:26:28.044: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container discover ready: false, restart count 0
Apr 22 22:26:28.044: INFO: Container init ready: false, restart count 0
Apr 22 22:26:28.044: INFO: Container install ready: false, restart count 0
Apr 22 22:26:28.044: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container nodereport ready: true, restart count 0
Apr 22 22:26:28.044: INFO: Container reconcile ready: true, restart count 0
Apr 22 22:26:28.044: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 22:26:28.044: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 22:26:28.044: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container kube-multus ready: true, restart count 1
Apr 22 22:26:28.044: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 22:26:28.044: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 22:26:28.044: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 22:26:28.044: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 22:26:28.044: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 22:26:28.044: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container collectd ready: true, restart count 0
Apr 22 22:26:28.044: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 22:26:28.044: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 22:26:28.044: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 22:26:28.044: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 22:26:28.044: INFO: Container node-exporter ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
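Editor's note: the step starting here creates a pod ("restricted-pod" in the event below) whose nodeSelector matches no node, then asserts that the scheduler records a FailedScheduling event. A sketch of the setup; the label key/value are made up for illustration, and the namespace name is the one this test destroys below.

```go
// Sketch: a pod with an unsatisfiable nodeSelector stays Pending.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// Assumed label; the point is that no node carries it, so the
			// two workers fail the selector and the three masters fail on
			// the untolerated master taint -> "0/5 nodes are available".
			NodeSelector: map[string]string{"no-such-label": "true"},
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/pause:3.4.1", // placeholder image
			}},
		},
	}
	if _, err := client.CoreV1().Pods("sched-pred-1183").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The pod should remain Pending; describing it would show the
	// FailedScheduling event quoted in the log below.
}
```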
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16e85855977a4adf], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 22:26:29.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1183" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":17,"skipped":5569,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 22 22:26:29.094: INFO: Running AfterSuite actions on all nodes
Apr 22 22:26:29.094: INFO: Running AfterSuite actions on node 1
Apr 22 22:26:29.094: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml

{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5756,"failed":0}

Ran 17 of 5773 Specs in 913.655 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5756 Skipped
PASS

Ginkgo ran 1 suite in 15m15.127020473s
Test Suite Passed