I0610 22:14:05.606960 24 e2e.go:129] Starting e2e run "f2cbe22d-1557-449f-863e-198420f76283" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654899244 - Will randomize all specs
Will run 17 of 5773 specs

Jun 10 22:14:05.667: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 22:14:05.672: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 10 22:14:05.705: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 10 22:14:05.769: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting
Jun 10 22:14:05.769: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting
Jun 10 22:14:05.769: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 10 22:14:05.769: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 10 22:14:05.769: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 10 22:14:05.786: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 10 22:14:05.786: INFO: e2e test version: v1.21.9
Jun 10 22:14:05.787: INFO: kube-apiserver version: v1.21.1
Jun 10 22:14:05.787: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 22:14:05.794: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:14:05.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
W0610 22:14:05.828388 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 22:14:05.828: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 22:14:05.832: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:14:05.852: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jun 10 22:14:05.859: INFO: Number of nodes with available pods: 0
Jun 10 22:14:05.859: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jun 10 22:14:05.875: INFO: Number of nodes with available pods: 0
Jun 10 22:14:05.875: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:06.880: INFO: Number of nodes with available pods: 0
Jun 10 22:14:06.880: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:07.881: INFO: Number of nodes with available pods: 0
Jun 10 22:14:07.881: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:08.880: INFO: Number of nodes with available pods: 1
Jun 10 22:14:08.880: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 10 22:14:08.896: INFO: Number of nodes with available pods: 1
Jun 10 22:14:08.896: INFO: Number of running nodes: 0, number of available pods: 1
Jun 10 22:14:09.901: INFO: Number of nodes with available pods: 0
Jun 10 22:14:09.901: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 10 22:14:09.913: INFO: Number of nodes with available pods: 0
Jun 10 22:14:09.913: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:10.918: INFO: Number of nodes with available pods: 0
Jun 10 22:14:10.918: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:11.918: INFO: Number of nodes with available pods: 0
Jun 10 22:14:11.918: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:12.942: INFO: Number of nodes with available pods: 0
Jun 10 22:14:12.942: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:13.918: INFO: Number of nodes with available pods: 0
Jun 10 22:14:13.918: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:14.917: INFO: Number of nodes with available pods: 0
Jun 10 22:14:14.917: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:15.917: INFO: Number of nodes with available pods: 0
Jun 10 22:14:15.917: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:16.917: INFO: Number of nodes with available pods: 0
Jun 10 22:14:16.917: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:17.916: INFO: Number of nodes with available pods: 0
Jun 10 22:14:17.916: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:18.916: INFO: Number of nodes with available pods: 0
Jun 10 22:14:18.916: INFO: Node node2 is running more than one daemon pod
Jun 10 22:14:19.917: INFO: Number of nodes with available pods: 1
Jun 10 22:14:19.917: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8517, will wait for the garbage collector to delete the pods
Jun 10 22:14:19.980: INFO: Deleting DaemonSet.extensions daemon-set took: 5.239771ms
Jun 10 22:14:20.080: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.295549ms
Jun 10 22:14:23.584: INFO: Number of nodes with available pods: 0
Jun 10 22:14:23.584: INFO: Number of running nodes: 0, number of available pods: 0
Jun 10 22:14:23.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"51615"},"items":null}
Jun 10 22:14:23.594: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"51615"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:14:23.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8517" for this suite.

• [SLOW TEST:17.819 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":1,"skipped":387,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:14:23.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
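
For readers reproducing this step outside the e2e framework: namespace deletion is asynchronous (the namespace sits in Terminating until its contents and finalizers drain), so "waiting for the namespace to be removed" is a poll against the Namespaces API until the server returns NotFound. A minimal client-go sketch of that check follows; it is an illustrative stand-in for the framework's helper, not the framework code itself, and the kubeconfig path and namespace name are taken from this log only as examples.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as it appears in the log above; adjust for your cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 2s, up to 5m, until the namespace GET returns NotFound.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(context.TODO(), "nsdeletetest-3756", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // deletion, including finalizers, has completed
		}
		return false, err // still Terminating (err == nil), or a real error
	})
	fmt.Println("namespace removed:", err == nil)
}
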
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:14:38.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1579" for this suite.
STEP: Destroying namespace "nsdeletetest-3756" for this suite.
Jun 10 22:14:38.746: INFO: Namespace nsdeletetest-3756 was already deleted
STEP: Destroying namespace "nsdeletetest-8454" for this suite.

• [SLOW TEST:15.133 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":2,"skipped":609,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:14:38.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 10 22:14:38.788: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 10 22:14:38.797: INFO: Waiting for terminating namespaces to be deleted...
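
The per-node pod dump that follows is a plain pod list filtered by node. A sketch of the equivalent query, assuming the client setup from the earlier sketch; the field selector on spec.nodeName is the standard way to ask the apiserver which pods it thinks are bound to a node.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// List every pod, in all namespaces, whose spec.nodeName is node1 --
	// the same information the framework logs before each predicate test.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=node1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s started at %v\n", p.Namespace, p.Name, p.Status.StartTime)
	}
}
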
Jun 10 22:14:38.799: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 10 22:14:38.809: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container discover ready: false, restart count 0
Jun 10 22:14:38.809: INFO: Container init ready: false, restart count 0
Jun 10 22:14:38.809: INFO: Container install ready: false, restart count 0
Jun 10 22:14:38.809: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container nodereport ready: true, restart count 0
Jun 10 22:14:38.809: INFO: Container reconcile ready: true, restart count 0
Jun 10 22:14:38.809: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container cmk-webhook ready: true, restart count 0
Jun 10 22:14:38.809: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:14:38.809: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:14:38.809: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 22:14:38.809: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 22:14:38.809: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:14:38.809: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:14:38.809: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container collectd ready: true, restart count 0
Jun 10 22:14:38.809: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:14:38.809: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:14:38.809: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:14:38.809: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:14:38.809: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container config-reloader ready: true, restart count 0
Jun 10 22:14:38.809: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 22:14:38.809: INFO: Container grafana ready: true, restart count 0
Jun 10 22:14:38.809: INFO: Container prometheus ready: true, restart count 1
Jun 10 22:14:38.809: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.809: INFO: Container tas-extender ready: true, restart count 0
Jun 10 22:14:38.809: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 10 22:14:38.820: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container discover ready: false, restart count 0
Jun 10 22:14:38.820: INFO: Container init ready: false, restart count 0
Jun 10 22:14:38.820: INFO: Container install ready: false, restart count 0
Jun 10 22:14:38.820: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container nodereport ready: true, restart count 0
Jun 10 22:14:38.820: INFO: Container reconcile ready: true, restart count 0
Jun 10 22:14:38.820: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:14:38.820: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:14:38.820: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 22:14:38.820: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 10 22:14:38.820: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 22:14:38.820: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 22:14:38.820: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:14:38.820: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:14:38.820: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container collectd ready: true, restart count 0
Jun 10 22:14:38.820: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:14:38.820: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:14:38.820: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 22:14:38.820: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:14:38.820: INFO: Container node-exporter ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6ac1a8ba-3d12-4481-8065-21fe59842a24 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-6ac1a8ba-3d12-4481-8065-21fe59842a24 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6ac1a8ba-3d12-4481-8065-21fe59842a24
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:14:46.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6185" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:8.152 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":3,"skipped":926,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:14:46.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:14:46.956: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
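
The repeated "can't tolerate node master1/2/3" lines that follow are expected, not an error: the three control-plane nodes carry the node-role.kubernetes.io/master:NoSchedule taint, the test DaemonSet declares no matching toleration, so those nodes are excluded from the "every node" count. For contrast, a sketch of what a toleration covering that taint would look like; this DaemonSet is illustrative (image and labels are placeholders), not the test's actual spec.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Illustrative DaemonSet that would also schedule onto the tainted masters
// seen in the log, because it tolerates the NoSchedule taint.
func masterTolerantDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"app": "example-daemon"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "example-daemon"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
					Containers: []corev1.Container{{
						Name:  "daemon",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
					}},
				},
			},
		},
	}
}

func main() { _ = masterTolerantDaemonSet() }
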
Jun 10 22:14:46.963: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:46.963: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:46.963: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:46.965: INFO: Number of nodes with available pods: 0
Jun 10 22:14:46.965: INFO: Node node1 is running more than one daemon pod
Jun 10 22:14:47.972: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:47.972: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:47.972: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:47.974: INFO: Number of nodes with available pods: 0
Jun 10 22:14:47.974: INFO: Node node1 is running more than one daemon pod
Jun 10 22:14:48.971: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:48.971: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:48.971: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:48.974: INFO: Number of nodes with available pods: 0
Jun 10 22:14:48.974: INFO: Node node1 is running more than one daemon pod
Jun 10 22:14:49.971: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:49.971: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:49.971: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:49.974: INFO: Number of nodes with available pods: 1
Jun 10 22:14:49.974: INFO: Node node1 is running more than one daemon pod
Jun 10 22:14:50.971: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:50.971: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:50.971: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:50.974: INFO: Number of nodes with available pods: 1
Jun 10 22:14:50.974: INFO: Node node1 is running more than one daemon pod
Jun 10 22:14:51.973: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:51.973: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:51.973: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:51.978: INFO: Number of nodes with available pods: 1
Jun 10 22:14:51.978: INFO: Node node1 is running more than one daemon pod
Jun 10 22:14:52.971: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:52.971: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:52.971: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:52.974: INFO: Number of nodes with available pods: 1
Jun 10 22:14:52.974: INFO: Node node1 is running more than one daemon pod
Jun 10 22:14:53.971: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:53.971: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:53.971: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:53.975: INFO: Number of nodes with available pods: 2
Jun 10 22:14:53.975: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jun 10 22:14:53.997: INFO: Wrong image for pod: daemon-set-qv8zv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Jun 10 22:14:53.997: INFO: Wrong image for pod: daemon-set-v6l8g. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Jun 10 22:14:54.003: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:54.003: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:54.003: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:55.007: INFO: Wrong image for pod: daemon-set-qv8zv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
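
The "Wrong image for pod" polling here is the test observing a RollingUpdate in flight: the pod template's image was changed from httpd:2.4.38-1 to agnhost:2.32, and the controller replaces daemon pods one node at a time. A sketch of triggering the same rollout with a strategic-merge patch; the DaemonSet name and namespace are taken from this log, while the container name "app" is an assumption for illustration, not confirmed by the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Strategic-merge patch bumping the container image; with the
	// RollingUpdate strategy this causes exactly the node-by-node pod
	// replacement the log is polling for. Container name "app" is a
	// placeholder -- match it to the actual pod template.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)
	ds, err := client.AppsV1().DaemonSets("daemonsets-4774").Patch(
		context.TODO(), "daemon-set", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched, observed generation:", ds.Generation)
}
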
Jun 10 22:14:55.011: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:55.011: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:55.011: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:56.006: INFO: Wrong image for pod: daemon-set-qv8zv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Jun 10 22:14:56.010: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:56.010: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:56.010: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:57.006: INFO: Wrong image for pod: daemon-set-qv8zv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Jun 10 22:14:57.011: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:57.011: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:57.011: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:58.009: INFO: Wrong image for pod: daemon-set-qv8zv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Jun 10 22:14:58.009: INFO: Pod daemon-set-vdgnd is not available
Jun 10 22:14:58.013: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:58.013: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:58.013: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:59.009: INFO: Wrong image for pod: daemon-set-qv8zv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Jun 10 22:14:59.009: INFO: Pod daemon-set-vdgnd is not available
Jun 10 22:14:59.018: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:59.018: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:14:59.018: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:00.008: INFO: Wrong image for pod: daemon-set-qv8zv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Jun 10 22:15:00.008: INFO: Pod daemon-set-vdgnd is not available
Jun 10 22:15:00.012: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:00.012: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:00.012: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:01.012: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:01.012: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:01.012: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:02.013: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:02.013: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:02.013: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:03.013: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:03.013: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:03.013: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:04.012: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:04.012: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:04.012: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:05.012: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:05.012: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:05.012: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:06.011: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:06.011: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:06.011: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:07.016: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:07.016: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:07.016: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:08.009: INFO: Pod daemon-set-5rsxs is not available
Jun 10 22:15:08.014: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:08.014: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:08.014: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jun 10 22:15:08.018: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:08.019: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:08.019: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:08.022: INFO: Number of nodes with available pods: 1
Jun 10 22:15:08.022: INFO: Node node2 is running more than one daemon pod
Jun 10 22:15:09.027: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:09.027: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:09.027: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:09.031: INFO: Number of nodes with available pods: 1
Jun 10 22:15:09.031: INFO: Node node2 is running more than one daemon pod
Jun 10 22:15:10.030: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:10.030: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:10.030: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:15:10.033: INFO: Number of nodes with available pods: 2
Jun 10 22:15:10.033: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4774, will wait for the garbage collector to delete the pods
Jun 10 22:15:10.105: INFO: Deleting DaemonSet.extensions daemon-set took: 5.182925ms
Jun 10 22:15:10.205: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.414135ms
Jun 10 22:15:16.908: INFO: Number of nodes with available pods: 0
Jun 10 22:15:16.908: INFO: Number of running nodes: 0, number of available pods: 0
Jun 10 22:15:16.911: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"51971"},"items":null}
Jun 10 22:15:16.914: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"51971"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:15:16.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4774" for this suite.
• [SLOW TEST:30.020 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":4,"skipped":1322,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:15:16.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 10 22:15:16.961: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 10 22:15:16.969: INFO: Waiting for terminating namespaces to be deleted...
Jun 10 22:15:16.972: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 10 22:15:16.986: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container discover ready: false, restart count 0
Jun 10 22:15:16.986: INFO: Container init ready: false, restart count 0
Jun 10 22:15:16.986: INFO: Container install ready: false, restart count 0
Jun 10 22:15:16.986: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container nodereport ready: true, restart count 0
Jun 10 22:15:16.986: INFO: Container reconcile ready: true, restart count 0
Jun 10 22:15:16.986: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container cmk-webhook ready: true, restart count 0
Jun 10 22:15:16.986: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:15:16.986: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:15:16.986: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 22:15:16.986: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 22:15:16.986: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:15:16.986: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:15:16.986: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container collectd ready: true, restart count 0
Jun 10 22:15:16.986: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:15:16.986: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:15:16.986: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:15:16.986: INFO: Container node-exporter ready: true, restart count 0
Jun 10 22:15:16.986: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container config-reloader ready: true, restart count 0
Jun 10 22:15:16.986: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 22:15:16.986: INFO: Container grafana ready: true, restart count 0
Jun 10 22:15:16.986: INFO: Container prometheus ready: true, restart count 1
Jun 10 22:15:16.986: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:16.986: INFO: Container tas-extender ready: true, restart count 0
Jun 10 22:15:16.986: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 10 22:15:17.003: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container discover ready: false, restart count 0
Jun 10 22:15:17.003: INFO: Container init ready: false, restart count 0
Jun 10 22:15:17.003: INFO: Container install ready: false, restart count 0
Jun 10 22:15:17.003: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container nodereport ready: true, restart count 0
Jun 10 22:15:17.003: INFO: Container reconcile ready: true, restart count 0
Jun 10 22:15:17.003: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 22:15:17.003: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container kube-multus ready: true, restart count 1
Jun 10 22:15:17.003: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 22:15:17.003: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 10 22:15:17.003: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 22:15:17.003: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 22:15:17.003: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 22:15:17.003: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 22:15:17.003: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container collectd ready: true, restart count 0
Jun 10 22:15:17.003: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 22:15:17.003: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 22:15:17.003: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 22:15:17.003: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 22:15:17.003: INFO: Container node-exporter ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16f762292b3ee6ee], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:15:18.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2821" for this suite.
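
The FailedScheduling event above is the outcome this test asserts: a pod whose nodeSelector matches no node label stays Pending, with the two workers rejected by the selector and the three masters rejected by their taint. A sketch of the kind of pod that produces that event; the selector key/value mirrors the test's "nonempty" wording, but the exact values and pause image are illustrative assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Illustrative pod no node can satisfy: the scheduler emits
// "node(s) didn't match Pod's node affinity/selector", as logged above.
func unschedulablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // no node carries this label
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1", // placeholder image
			}},
		},
	}
}

func main() { _ = unschedulablePod() }
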
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":5,"skipped":1393,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:15:18.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jun 10 22:15:18.362: INFO: Pod name wrapped-volume-race-f7b82578-6977-41e7-80a1-432f55fc8953: Found 3 pods out of 5
Jun 10 22:15:23.376: INFO: Pod name wrapped-volume-race-f7b82578-6977-41e7-80a1-432f55fc8953: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f7b82578-6977-41e7-80a1-432f55fc8953 in namespace emptydir-wrapper-1087, will wait for the garbage collector to delete the pods
Jun 10 22:15:37.459: INFO: Deleting ReplicationController wrapped-volume-race-f7b82578-6977-41e7-80a1-432f55fc8953 took: 6.527561ms
Jun 10 22:15:37.560: INFO: Terminating ReplicationController wrapped-volume-race-f7b82578-6977-41e7-80a1-432f55fc8953 pods took: 101.017664ms
STEP: Creating RC which spawns configmap-volume pods
Jun 10 22:15:46.977: INFO: Pod name wrapped-volume-race-6466435f-984f-4397-8cb2-b2738086d74b: Found 0 pods out of 5
Jun 10 22:15:51.984: INFO: Pod name wrapped-volume-race-6466435f-984f-4397-8cb2-b2738086d74b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6466435f-984f-4397-8cb2-b2738086d74b in namespace emptydir-wrapper-1087, will wait for the garbage collector to delete the pods
Jun 10 22:16:06.065: INFO: Deleting ReplicationController wrapped-volume-race-6466435f-984f-4397-8cb2-b2738086d74b took: 6.086416ms
Jun 10 22:16:06.166: INFO: Terminating ReplicationController wrapped-volume-race-6466435f-984f-4397-8cb2-b2738086d74b pods took: 101.077117ms
STEP: Creating RC which spawns configmap-volume pods
Jun 10 22:16:16.987: INFO: Pod name wrapped-volume-race-08ba23cf-c179-42b5-8829-c65fe80368c2: Found 0 pods out of 5
Jun 10 22:16:21.999: INFO: Pod name wrapped-volume-race-08ba23cf-c179-42b5-8829-c65fe80368c2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-08ba23cf-c179-42b5-8829-c65fe80368c2 in namespace emptydir-wrapper-1087, will wait for the garbage collector to delete the pods
Jun 10 22:16:36.083: INFO: Deleting ReplicationController wrapped-volume-race-08ba23cf-c179-42b5-8829-c65fe80368c2 took: 6.407197ms
Jun 10 22:16:36.184: INFO: Terminating ReplicationController wrapped-volume-race-08ba23cf-c179-42b5-8829-c65fe80368c2 pods took: 100.856539ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:16:47.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1087" for this suite.

• [SLOW TEST:89.328 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":6,"skipped":1611,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:16:47.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 10 22:16:47.419: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 10 22:17:47.482: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Jun 10 22:17:47.510: INFO: Created pod: pod0-sched-preemption-low-priority
Jun 10 22:17:47.531: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:18:07.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6045" for this suite.
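
The preemption test above fills roughly 2/3 of node resources with low- and medium-priority pods, then runs a critical pod sized like one of them and expects the scheduler to evict a lower-priority victim to make room. The mechanism underneath is the PriorityClass API plus spec.priorityClassName on the pod; a minimal sketch follows. The class name and value here are illustrative (the test itself relies on its own low/medium classes and a built-in system-critical class, which the log does not spell out).

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Illustrative PriorityClass; a pod referencing it via
	// spec.priorityClassName can preempt pods whose class has a lower value.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "medium-priority"},
		Value:      100, // higher value wins when the scheduler must evict
	}
	created, err := client.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created priority class:", created.Name, created.Value)
}
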
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:80.238 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":7,"skipped":1728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:18:07.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 10 22:18:07.656: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 10 22:18:07.665: INFO: Waiting for terminating namespaces to be deleted... 
Jun 10 22:18:07.668: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 10 22:18:07.678: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded) Jun 10 22:18:07.678: INFO: Container discover ready: false, restart count 0 Jun 10 22:18:07.678: INFO: Container init ready: false, restart count 0 Jun 10 22:18:07.678: INFO: Container install ready: false, restart count 0 Jun 10 22:18:07.678: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 22:18:07.678: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:18:07.678: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:18:07.678: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.678: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 22:18:07.678: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.678: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:18:07.678: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.678: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:18:07.678: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.678: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:18:07.678: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.678: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:18:07.678: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.678: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:18:07.678: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.678: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:18:07.678: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 22:18:07.678: INFO: Container collectd ready: true, restart count 0 Jun 10 22:18:07.678: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:18:07.678: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:18:07.678: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 22:18:07.678: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:18:07.678: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:18:07.678: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded) Jun 10 22:18:07.678: INFO: Container config-reloader ready: true, restart count 0 Jun 10 22:18:07.678: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 22:18:07.678: INFO: Container grafana ready: true, restart count 0 Jun 10 22:18:07.678: INFO: Container prometheus ready: true, restart count 1 Jun 10 22:18:07.678: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC 
(1 container statuses recorded) Jun 10 22:18:07.678: INFO: Container tas-extender ready: true, restart count 0 Jun 10 22:18:07.678: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 10 22:18:07.686: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded) Jun 10 22:18:07.686: INFO: Container discover ready: false, restart count 0 Jun 10 22:18:07.687: INFO: Container init ready: false, restart count 0 Jun 10 22:18:07.687: INFO: Container install ready: false, restart count 0 Jun 10 22:18:07.687: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 22:18:07.687: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:18:07.687: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:18:07.687: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:18:07.687: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:18:07.687: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:18:07.687: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 22:18:07.687: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 22:18:07.687: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:18:07.687: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:18:07.687: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:18:07.687: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 22:18:07.687: INFO: Container collectd ready: true, restart count 0 Jun 10 22:18:07.687: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:18:07.687: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:18:07.687: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 22:18:07.687: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:18:07.687: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:18:07.687: INFO: pod1-sched-preemption-medium-priority from sched-preemption-6045 started at 2022-06-10 22:17:51 +0000 UTC (1 container statuses recorded) Jun 10 22:18:07.687: INFO: Container 
pod1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-03db4cbb-5790-40f0-abee-48e39137bf69 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-03db4cbb-5790-40f0-abee-48e39137bf69 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-03db4cbb-5790-40f0-abee-48e39137bf69 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:23:15.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5310" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.174 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":8,"skipped":2335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:23:15.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label 
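The "patching the Namespace" / "get the Namespace and ensuring it has the label" steps above amount to a one-line merge patch against the Namespace object followed by a read-back. A sketch under the assumption of a merge patch; the namespace name and label key/value are illustrative (the conformance test's exact key is not shown in this log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Merge-patch a label onto the namespace, then read back the result.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`) // illustrative key/value
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(), "nspatchtest-example",
		types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels after patch:", ns.Labels)
}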
[AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:23:15.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9422" for this suite. STEP: Destroying namespace "nspatchtest-bc9bc429-bbcb-4ede-b6e0-cd4498ff42fa-4085" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":9,"skipped":2752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:23:15.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:23:15.919: INFO: Create a RollingUpdate DaemonSet Jun 10 22:23:15.923: INFO: Check that daemon pods launch on every node of the cluster Jun 10 22:23:15.927: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:15.927: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:15.927: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:15.930: INFO: Number of nodes with available pods: 0 Jun 10 22:23:15.930: INFO: Node node1 is running more than one daemon pod Jun 10 22:23:16.936: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:16.936: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:16.936: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:16.939: INFO: Number of nodes with available pods: 0 Jun 10 22:23:16.939: INFO: Node node1 is running more than one daemon pod Jun 10 22:23:17.936: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:17.937: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Jun 10 22:23:17.937: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:17.939: INFO: Number of nodes with available pods: 0 Jun 10 22:23:17.939: INFO: Node node1 is running more than one daemon pod Jun 10 22:23:18.934: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:18.934: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:18.934: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:18.937: INFO: Number of nodes with available pods: 1 Jun 10 22:23:18.937: INFO: Node node2 is running more than one daemon pod Jun 10 22:23:19.937: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:19.937: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:19.937: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:19.940: INFO: Number of nodes with available pods: 2 Jun 10 22:23:19.940: INFO: Number of running nodes: 2, number of available pods: 2 Jun 10 22:23:19.940: INFO: Update the DaemonSet to trigger a rollout Jun 10 22:23:19.948: INFO: Updating DaemonSet daemon-set Jun 10 22:23:26.964: INFO: Roll back the DaemonSet before rollout is complete Jun 10 22:23:26.971: INFO: Updating DaemonSet daemon-set Jun 10 22:23:26.971: INFO: Make sure DaemonSet rollback is complete Jun 10 22:23:26.973: INFO: Wrong image for pod: daemon-set-q7b7m. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
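In API terms, what the spec just did: update the DaemonSet's pod template to an image that can never pull (foo:non-existent here), then restore the previous template before the rollout finishes; the controller must converge without restarting the pods that still run the old image. A rough client-go sketch of that sequence (DaemonSet name and namespace taken from this log; the retry-on-conflict handling a real test needs is omitted):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	dsClient := cs.AppsV1().DaemonSets("daemonsets-5797")

	// Trigger a RollingUpdate by switching to an image that cannot be pulled.
	ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // e.g. k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// "Roll back" mid-rollout by restoring the previous template; healthy
	// pods still running the old image are left alone rather than restarted.
	ds.Spec.Template.Spec.Containers[0].Image = good
	if _, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}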
Jun 10 22:23:26.973: INFO: Pod daemon-set-q7b7m is not available Jun 10 22:23:26.978: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:26.978: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:26.978: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:27.989: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:27.989: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:27.989: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:28.987: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:28.987: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:28.987: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:29.988: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:29.988: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:29.988: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:30.987: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:30.987: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:30.987: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:31.988: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:31.988: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:31.988: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:32.990: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:32.990: INFO: 
DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:32.990: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:33.986: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:33.986: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:33.986: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:34.986: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:34.987: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:34.987: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:35.989: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:35.989: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:35.989: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:36.984: INFO: Pod daemon-set-n5dr5 is not available Jun 10 22:23:36.988: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:36.988: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:23:36.989: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5797, will wait for the garbage collector to delete the pods Jun 10 22:23:37.054: INFO: Deleting DaemonSet.extensions daemon-set took: 4.61248ms Jun 10 22:23:37.155: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.861692ms Jun 10 22:23:40.358: INFO: Number of nodes with available pods: 0 Jun 10 22:23:40.358: INFO: Number of running nodes: 0, number of available pods: 0 Jun 10 22:23:40.361: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"54394"},"items":null} Jun 10 22:23:40.364: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"54394"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:23:40.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5797" for this suite. • [SLOW TEST:24.512 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":10,"skipped":2950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:23:40.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 10 22:23:40.427: INFO: Waiting up to 1m0s for all nodes to be ready Jun 10 22:24:40.492: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:24:40.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 10 22:24:40.530: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Jun 10 22:24:40.533: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:24:40.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-125" for this suite. 
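The two "Value: Forbidden: may not be changed in an update" lines above are the expected outcome, not a failure: a PriorityClass's value is immutable, so the spec verifies that update attempts against it are rejected while the other HTTP methods succeed. A sketch of provoking that same error (class name and values are illustrative):

package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pcClient := cs.SchedulingV1().PriorityClasses()

	pc, err := pcClient.Create(ctx, &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "p1"},
		Value:      99, // illustrative priority value
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Changing Value on update is rejected by validation, matching the
	// "Value: Forbidden: may not be changed in an update" lines above.
	pc.Value = 100
	if _, err := pcClient.Update(ctx, pc, metav1.UpdateOptions{}); err != nil {
		fmt.Println("expected failure:", err)
	}
}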
[AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:24:40.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7466" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.231 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":11,"skipped":3306,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:24:40.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 10 22:24:40.669: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:40.669: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:40.669: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:40.671: INFO: Number of nodes with available pods: 0 Jun 10 22:24:40.671: INFO: Node node1 is running more than one daemon pod Jun 10 22:24:41.678: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:41.678: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:41.678: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:41.681: INFO: Number of nodes with available pods: 0 Jun 10 22:24:41.681: INFO: Node node1 is running more than one daemon pod Jun 10 22:24:42.678: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:42.678: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:42.678: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:42.681: INFO: Number of nodes with available pods: 0 Jun 10 22:24:42.681: INFO: Node node1 is running more than one daemon pod Jun 10 22:24:43.678: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:43.678: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:43.678: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:43.681: INFO: Number of nodes with available pods: 1 Jun 10 22:24:43.681: INFO: Node node2 is running more than one daemon pod Jun 10 22:24:44.678: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:44.678: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:44.678: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:44.682: INFO: Number of nodes with available pods: 2 Jun 10 22:24:44.682: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jun 10 22:24:44.698: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:44.698: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:44.698: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:44.702: INFO: Number of nodes with available pods: 1 Jun 10 22:24:44.702: INFO: Node node2 is running more than one daemon pod Jun 10 22:24:45.708: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:45.708: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:45.708: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:45.711: INFO: Number of nodes with available pods: 1 Jun 10 22:24:45.711: INFO: Node node2 is running more than one daemon pod Jun 10 22:24:46.710: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:46.710: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:46.710: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:46.712: INFO: Number of nodes with available pods: 1 Jun 10 22:24:46.713: INFO: Node node2 is running more than one daemon pod Jun 10 22:24:47.707: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:47.707: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:47.707: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:47.710: INFO: Number of nodes with available pods: 1 Jun 10 22:24:47.710: INFO: Node node2 is running more than one daemon pod Jun 10 22:24:48.711: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:48.711: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:48.711: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 10 22:24:48.715: INFO: Number of nodes with available pods: 2 Jun 10 22:24:48.715: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
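The "Set a daemon pod's phase to 'Failed'" step is a write against the pod's status subresource; the DaemonSet controller then deletes the failed pod and schedules a fresh replacement on the same node, which is what the availability loop above waits for. A rough sketch of that flip, assuming the suite's pods carry a daemonset-name label (an assumption about the test's labeling, not shown in this log):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("daemonsets-1835")

	// Pick one daemon pod and force its phase to Failed via the status
	// subresource; the DaemonSet controller reacts by replacing it.
	list, err := pods.List(ctx, metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
	if err != nil {
		panic(err)
	}
	if len(list.Items) == 0 {
		panic("no daemon pods found")
	}
	victim := list.Items[0]
	victim.Status.Phase = corev1.PodFailed
	if _, err := pods.UpdateStatus(ctx, &victim, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}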
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1835, will wait for the garbage collector to delete the pods Jun 10 22:24:48.780: INFO: Deleting DaemonSet.extensions daemon-set took: 6.084037ms Jun 10 22:24:48.880: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.306205ms Jun 10 22:24:57.084: INFO: Number of nodes with available pods: 0 Jun 10 22:24:57.084: INFO: Number of running nodes: 0, number of available pods: 0 Jun 10 22:24:57.086: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"54724"},"items":null} Jun 10 22:24:57.089: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"54724"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:24:57.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1835" for this suite. • [SLOW TEST:16.483 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":12,"skipped":3486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:24:57.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 10 22:24:57.147: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 10 22:24:57.155: INFO: Waiting for terminating namespaces to be deleted... Jun 10 22:24:57.157: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 10 22:24:57.168: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded) Jun 10 22:24:57.168: INFO: Container discover ready: false, restart count 0 Jun 10 22:24:57.168: INFO: Container init ready: false, restart count 0 Jun 10 22:24:57.168: INFO: Container install ready: false, restart count 0 Jun 10 22:24:57.168: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 22:24:57.168: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:24:57.168: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:24:57.168: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.168: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 22:24:57.168: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.168: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:24:57.168: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.168: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:24:57.168: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.168: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 22:24:57.168: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.168: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:24:57.168: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.168: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:24:57.168: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.168: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:24:57.168: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 22:24:57.168: INFO: Container collectd ready: true, restart count 0 Jun 10 22:24:57.168: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:24:57.168: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:24:57.168: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 22:24:57.168: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 22:24:57.168: INFO: Container node-exporter ready: true, restart count 0 Jun 10 22:24:57.168: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded) Jun 10 22:24:57.168: INFO: Container config-reloader ready: true, restart count 0 Jun 10 22:24:57.168: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 
22:24:57.168: INFO: Container grafana ready: true, restart count 0 Jun 10 22:24:57.168: INFO: Container prometheus ready: true, restart count 1 Jun 10 22:24:57.168: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.168: INFO: Container tas-extender ready: true, restart count 0 Jun 10 22:24:57.168: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 10 22:24:57.185: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded) Jun 10 22:24:57.185: INFO: Container discover ready: false, restart count 0 Jun 10 22:24:57.185: INFO: Container init ready: false, restart count 0 Jun 10 22:24:57.185: INFO: Container install ready: false, restart count 0 Jun 10 22:24:57.185: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 22:24:57.185: INFO: Container nodereport ready: true, restart count 0 Jun 10 22:24:57.185: INFO: Container reconcile ready: true, restart count 0 Jun 10 22:24:57.185: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.185: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 22:24:57.185: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.185: INFO: Container kube-multus ready: true, restart count 1 Jun 10 22:24:57.185: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.185: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 22:24:57.185: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.185: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 22:24:57.185: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.185: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 22:24:57.185: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.185: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 22:24:57.185: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.185: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 22:24:57.185: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 22:24:57.185: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 22:24:57.185: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 22:24:57.185: INFO: Container collectd ready: true, restart count 0 Jun 10 22:24:57.185: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 22:24:57.185: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 22:24:57.185: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 22:24:57.185: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 
22:24:57.185: INFO: Container node-exporter ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 Jun 10 22:24:57.247: INFO: Pod cmk-qjrhs requesting resource cpu=0m on Node node1 Jun 10 22:24:57.247: INFO: Pod cmk-webhook-6c9d5f8578-n9w8j requesting resource cpu=0m on Node node1 Jun 10 22:24:57.247: INFO: Pod cmk-zpstc requesting resource cpu=0m on Node node2 Jun 10 22:24:57.247: INFO: Pod kube-flannel-8jl6m requesting resource cpu=150m on Node node2 Jun 10 22:24:57.247: INFO: Pod kube-flannel-x926c requesting resource cpu=150m on Node node1 Jun 10 22:24:57.247: INFO: Pod kube-multus-ds-amd64-4gckf requesting resource cpu=100m on Node node1 Jun 10 22:24:57.247: INFO: Pod kube-multus-ds-amd64-nj866 requesting resource cpu=100m on Node node2 Jun 10 22:24:57.247: INFO: Pod kube-proxy-4clxz requesting resource cpu=0m on Node node2 Jun 10 22:24:57.247: INFO: Pod kube-proxy-5bkrr requesting resource cpu=0m on Node node1 Jun 10 22:24:57.247: INFO: Pod kubernetes-dashboard-785dcbb76d-7pmgn requesting resource cpu=50m on Node node2 Jun 10 22:24:57.247: INFO: Pod kubernetes-metrics-scraper-5558854cb-pf6tn requesting resource cpu=0m on Node node2 Jun 10 22:24:57.247: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Jun 10 22:24:57.247: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Jun 10 22:24:57.247: INFO: Pod node-feature-discovery-worker-9xsdt requesting resource cpu=0m on Node node1 Jun 10 22:24:57.247: INFO: Pod node-feature-discovery-worker-s9mwk requesting resource cpu=0m on Node node2 Jun 10 22:24:57.247: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v requesting resource cpu=0m on Node node1 Jun 10 22:24:57.247: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 requesting resource cpu=0m on Node node2 Jun 10 22:24:57.247: INFO: Pod collectd-kpj5z requesting resource cpu=0m on Node node1 Jun 10 22:24:57.247: INFO: Pod collectd-srmjh requesting resource cpu=0m on Node node2 Jun 10 22:24:57.247: INFO: Pod node-exporter-tk8f9 requesting resource cpu=112m on Node node1 Jun 10 22:24:57.247: INFO: Pod node-exporter-trpg7 requesting resource cpu=112m on Node node2 Jun 10 22:24:57.247: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Jun 10 22:24:57.247: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-lb2mn requesting resource cpu=0m on Node node1 STEP: Starting Pods to consume most of the cluster CPU. Jun 10 22:24:57.247: INFO: Creating a pod which consumes cpu=53594m on Node node2 Jun 10 22:24:57.258: INFO: Creating a pod which consumes cpu=53489m on Node node1 STEP: Creating another pod that requires unavailable amount of CPU. 
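The arithmetic behind the filler sizes just logged: the suite sums the per-pod CPU requests it printed above, subtracts them from each node's allocatable CPU, and creates one filler pod per node sized to leave almost nothing free (hence the odd cpu=53594m / cpu=53489m figures). The "additional" pod must then fail scheduling with Insufficient cpu. A compact sketch of that last step, using a deliberately oversized request rather than the computed remainder (pod name, image, and size are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Request more CPU than any node can offer; the pod stays Pending and a
	// FailedScheduling event like the one above ("0/5 nodes are available:
	// 2 Insufficient cpu, ...") is recorded against it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("600"), // far above allocatable
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("sched-pred-4925").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}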
STEP: Considering event: Type = [Normal], Name = [filler-pod-c2ece800-ea00-4564-b215-3e78ad7035bb.16f762b043adac27], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4925/filler-pod-c2ece800-ea00-4564-b215-3e78ad7035bb to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2ece800-ea00-4564-b215-3e78ad7035bb.16f762b0a369621b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2ece800-ea00-4564-b215-3e78ad7035bb.16f762b0b59523ca], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 304.845845ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2ece800-ea00-4564-b215-3e78ad7035bb.16f762b0bc2fdbb0], Reason = [Created], Message = [Created container filler-pod-c2ece800-ea00-4564-b215-3e78ad7035bb] STEP: Considering event: Type = [Normal], Name = [filler-pod-c2ece800-ea00-4564-b215-3e78ad7035bb.16f762b0c3b8de8e], Reason = [Started], Message = [Started container filler-pod-c2ece800-ea00-4564-b215-3e78ad7035bb] STEP: Considering event: Type = [Normal], Name = [filler-pod-e03174de-3f75-4fdd-8673-18b1d2c052e5.16f762b0432c0049], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4925/filler-pod-e03174de-3f75-4fdd-8673-18b1d2c052e5 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e03174de-3f75-4fdd-8673-18b1d2c052e5.16f762b0bdf79e57], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-e03174de-3f75-4fdd-8673-18b1d2c052e5.16f762b0d05c2bdb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 308.573449ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-e03174de-3f75-4fdd-8673-18b1d2c052e5.16f762b0d6450976], Reason = [Created], Message = [Created container filler-pod-e03174de-3f75-4fdd-8673-18b1d2c052e5] STEP: Considering event: Type = [Normal], Name = [filler-pod-e03174de-3f75-4fdd-8673-18b1d2c052e5.16f762b0dd4ab9b6], Reason = [Started], Message = [Started container filler-pod-e03174de-3f75-4fdd-8673-18b1d2c052e5] STEP: Considering event: Type = [Warning], Name = [additional-pod.16f762b133a5bc7e], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:25:02.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4925" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.231 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":13,"skipped":4511,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 22:25:02.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 22:25:08.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6756" for this suite. STEP: Destroying namespace "nsdeletetest-6691" for this suite. Jun 10 22:25:08.439: INFO: Namespace nsdeletetest-6691 was already deleted STEP: Destroying namespace "nsdeletetest-5659" for this suite. 
• [SLOW TEST:6.090 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":14,"skipped":4545,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:25:08.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 10 22:25:08.495: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 10 22:26:08.552: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Jun 10 22:26:08.577: INFO: Created pod: pod0-sched-preemption-low-priority
Jun 10 22:26:08.596: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:26:40.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5077" for this suite.
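
The pod names make the mechanism visible: a low-priority and a medium-priority pod are sized to occupy most of each node, then a high-priority pod with the same requirements is submitted, and the scheduler can only place it by evicting the low-priority victim. A rough sketch of the building blocks follows; the PriorityClass name, its value, and the namespace are illustrative, not the suite's own:

    package main

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        // A cluster-scoped PriorityClass; only the relative ordering of
        // Value across classes matters for preemption.
        high := &schedulingv1.PriorityClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "demo-high-priority"}, // hypothetical
            Value:       1000,
            Description: "pods that may preempt lower-priority pods",
        }
        if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, high, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }

        // A pod referencing the class: if no node can fit it, the scheduler
        // may evict pods whose priority is below 1000 to make room, which is
        // exactly what happens to pod0-sched-preemption-low-priority above.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "preemptor"}, // hypothetical
            Spec: corev1.PodSpec{
                PriorityClassName: "demo-high-priority",
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.4.1",
                }},
            },
        }
        if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }

In the real test the preemptor also carries the same resource requests as the victim; without requests it would simply fit alongside the existing pods and nothing would be preempted.
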
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:92.229 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":15,"skipped":5093,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:26:40.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 10 22:26:40.734: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:40.734: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:40.734: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:40.736: INFO: Number of nodes with available pods: 0
Jun 10 22:26:40.736: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:41.742: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:41.742: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:41.742: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:41.745: INFO: Number of nodes with available pods: 0
Jun 10 22:26:41.745: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:42.743: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:42.743: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:42.743: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:42.750: INFO: Number of nodes with available pods: 0
Jun 10 22:26:42.750: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:43.745: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:43.745: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:43.745: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:43.749: INFO: Number of nodes with available pods: 2
Jun 10 22:26:43.749: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jun 10 22:26:43.763: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:43.763: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:43.763: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:43.765: INFO: Number of nodes with available pods: 1
Jun 10 22:26:43.766: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:44.771: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:44.771: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:44.771: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:44.774: INFO: Number of nodes with available pods: 1
Jun 10 22:26:44.774: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:45.772: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:45.772: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:45.772: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:45.775: INFO: Number of nodes with available pods: 1
Jun 10 22:26:45.775: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:46.771: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:46.771: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:46.771: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:46.774: INFO: Number of nodes with available pods: 1
Jun 10 22:26:46.774: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:47.774: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:47.774: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:47.774: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:47.777: INFO: Number of nodes with available pods: 1
Jun 10 22:26:47.777: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:48.772: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:48.772: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:48.772: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:48.775: INFO: Number of nodes with available pods: 1
Jun 10 22:26:48.775: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:49.771: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:49.771: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:49.771: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:49.774: INFO: Number of nodes with available pods: 1
Jun 10 22:26:49.774: INFO: Node node1 is running more than one daemon pod
Jun 10 22:26:50.774: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:50.774: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:50.774: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 10 22:26:50.777: INFO: Number of nodes with available pods: 2
Jun 10 22:26:50.777: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3396, will wait for the garbage collector to delete the pods
Jun 10 22:26:50.839: INFO: Deleting DaemonSet.extensions daemon-set took: 6.025841ms
Jun 10 22:26:50.940: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.029784ms
Jun 10 22:26:56.943: INFO: Number of nodes with available pods: 0
Jun 10 22:26:56.943: INFO: Number of running nodes: 0, number of available pods: 0
Jun 10 22:26:56.946: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55308"},"items":null}
Jun 10 22:26:56.949: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55308"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:26:56.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3396" for this suite.
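
Two behaviours are visible in the run above: daemon pods are never placed on master1-3 because the pod template carries no toleration for the node-role.kubernetes.io/master NoSchedule taint, and after one daemon pod is deleted the controller recreates it within a few seconds (available pods drop to 1, then return to 2). A minimal sketch of such a DaemonSet follows, with a namespace and label of my choosing rather than the test's generated ones:

    package main

    import (
        "context"
        "log"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        labels := map[string]string{"app": "demo-daemon"} // hypothetical label
        ds := &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "pause",
                            Image: "k8s.gcr.io/pause:3.4.1",
                        }},
                        // Without a toleration like the one below, daemon pods
                        // skip tainted masters, which is exactly what the
                        // "can't tolerate node master1" lines record. Add it to
                        // cover control-plane nodes as well:
                        Tolerations: []corev1.Toleration{{
                            Key:      "node-role.kubernetes.io/master",
                            Operator: corev1.TolerationOpExists,
                            Effect:   corev1.TaintEffectNoSchedule,
                        }},
                    },
                },
            },
        }
        if _, err := client.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
        // Deleting any one daemon pod now triggers the "revived" behaviour the
        // test checks: the controller immediately replaces it on that node.
    }
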
• [SLOW TEST:16.286 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":16,"skipped":5290,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:26:56.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 10 22:26:57.010: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 10 22:27:57.073: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 22:27:57.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Jun 10 22:28:01.131: INFO: found a healthy node: node2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 10 22:28:17.194: INFO: pods created so far: [1 1 1]
Jun 10 22:28:17.194: INFO: length of pods created so far: 3
Jun 10 22:28:33.208: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:28:40.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-1408" for this suite.
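
Per its name, this spec exercises the preemption execution path end to end: ReplicaSets at different priorities are pinned to the node found above (node2) and the bracketed triples ([1 1 1], then [2 2 1]) evidently track pods created per ReplicaSet as higher-priority replicas work their way onto the node. A sketch of one such pinned, prioritized ReplicaSet follows; all names are illustrative, and the PriorityClass is the hypothetical one from the earlier preemption sketch, not an object the suite creates under that name:

    package main

    import (
        "context"
        "log"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // int32Ptr is a small helper for ReplicaSet.Spec.Replicas.
    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        labels := map[string]string{"app": "demo-preemption-rs"} // hypothetical label
        rs := &appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: "high-priority-rs"}, // hypothetical name
            Spec: appsv1.ReplicaSetSpec{
                Replicas: int32Ptr(2),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // Pin every replica to the node found healthy above so
                        // the replicas compete for a single node's capacity.
                        NodeSelector:      map[string]string{"kubernetes.io/hostname": "node2"},
                        // Assumes a PriorityClass like the earlier sketch's
                        // exists; each replica inherits its priority, so the
                        // whole set can preempt lower-priority pods on the node.
                        PriorityClassName: "demo-high-priority",
                        Containers: []corev1.Container{{
                            Name:  "pause",
                            Image: "k8s.gcr.io/pause:3.4.1",
                        }},
                    },
                },
            },
        }
        if _, err := client.AppsV1().ReplicaSets("default").Create(context.TODO(), rs, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }
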
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 22:28:40.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3582" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:103.317 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":17,"skipped":5623,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 10 22:28:40.294: INFO: Running AfterSuite actions on all nodes
Jun 10 22:28:40.294: INFO: Running AfterSuite actions on node 1
Jun 10 22:28:40.294: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5756,"failed":0}

Ran 17 of 5773 Specs in 874.632 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5756 Skipped
PASS

Ginkgo ran 1 suite in 14m36.056930783s
Test Suite Passed