I1113 01:26:34.017278 22 e2e.go:129] Starting e2e run "7cfc6a4c-9f0e-42bd-b951-186800a58870" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1636766792 - Will randomize all specs Will run 17 of 5770 specs Nov 13 01:26:34.078: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:26:34.083: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Nov 13 01:26:34.111: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 01:26:34.181: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 01:26:34.181: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 01:26:34.181: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 01:26:34.181: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 13 01:26:34.181: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Nov 13 01:26:34.201: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Nov 13 01:26:34.201: INFO: e2e test version: v1.21.5 Nov 13 01:26:34.202: INFO: kube-apiserver version: v1.21.1 Nov 13 01:26:34.202: INFO: >>> kubeConfig: /root/.kube/config Nov 13 01:26:34.208: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:26:34.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces W1113 01:26:34.250668 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 01:26:34.250: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 01:26:34.254: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:26:34.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7747" for this suite. STEP: Destroying namespace "nspatchtest-0cbd15db-27e8-498e-a4ad-0ad6bf0685b5-3649" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":1,"skipped":1828,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:26:34.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 01:26:34.313: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 01:26:34.322: INFO: Waiting for terminating namespaces to be deleted... Nov 13 01:26:34.324: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 01:26:34.332: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 01:26:34.332: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:26:34.332: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 01:26:34.332: INFO: Container discover ready: false, restart count 0 Nov 13 01:26:34.332: INFO: Container init ready: false, restart count 0 Nov 13 01:26:34.332: INFO: Container install ready: false, restart count 0 Nov 13 01:26:34.332: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.332: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 01:26:34.332: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.332: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 01:26:34.332: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.332: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:26:34.332: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.332: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:26:34.332: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.332: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:26:34.332: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.332: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:26:34.332: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.332: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:26:34.332: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 01:26:34.332: INFO: Container collectd ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:26:34.332: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 01:26:34.332: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:26:34.332: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 01:26:34.332: INFO: Container config-reloader ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Container grafana ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Container prometheus ready: true, restart count 1 Nov 13 01:26:34.332: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 container statuses recorded) Nov 13 01:26:34.332: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 01:26:34.332: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 01:26:34.342: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 01:26:34.342: INFO: Container discover ready: false, restart count 0 Nov 13 01:26:34.342: INFO: Container init ready: false, restart count 0 Nov 13 01:26:34.342: INFO: Container install ready: false, restart count 0 Nov 13 01:26:34.342: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 01:26:34.342: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:26:34.342: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:26:34.342: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 01:26:34.342: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:26:34.342: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:26:34.342: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 01:26:34.342: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 01:26:34.342: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:26:34.342: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:26:34.342: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:26:34.342: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 01:26:34.342: INFO: Container collectd ready: true, restart count 0 Nov 13 01:26:34.342: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:26:34.342: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:26:34.342: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 01:26:34.342: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:26:34.342: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:26:34.342: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 01:26:34.342: INFO: Container tas-extender ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 Nov 13 01:26:34.400: INFO: Pod cmk-4tcdw requesting resource cpu=0m on Node node1 Nov 13 01:26:34.400: INFO: Pod cmk-qhvr7 requesting resource cpu=0m on Node node2 Nov 13 01:26:34.400: INFO: Pod cmk-webhook-6c9d5f8578-2gp25 requesting resource cpu=0m on Node node1 Nov 13 01:26:34.400: INFO: Pod kube-flannel-mg66r requesting resource cpu=150m on Node node2 Nov 13 01:26:34.400: INFO: Pod kube-flannel-r7bbp requesting resource cpu=150m on Node node1 Nov 13 01:26:34.400: INFO: Pod kube-multus-ds-amd64-2wqj5 requesting resource cpu=100m on Node node2 Nov 13 01:26:34.400: INFO: Pod kube-multus-ds-amd64-4wqsv requesting resource cpu=100m on Node node1 Nov 13 01:26:34.400: INFO: Pod kube-proxy-p6kbl requesting resource cpu=0m on Node node1 Nov 13 01:26:34.400: INFO: Pod kube-proxy-pzhf2 requesting resource cpu=0m on Node node2 Nov 13 01:26:34.400: INFO: Pod kubernetes-dashboard-785dcbb76d-w2mls requesting resource cpu=50m on Node node2 Nov 13 01:26:34.400: INFO: Pod kubernetes-metrics-scraper-5558854cb-jmbpk requesting resource cpu=0m on Node node2 Nov 13 01:26:34.400: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Nov 13 01:26:34.400: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Nov 13 01:26:34.400: INFO: Pod node-feature-discovery-worker-mm7xs requesting resource cpu=0m on Node node2 Nov 13 01:26:34.400: INFO: Pod node-feature-discovery-worker-zgr4c requesting resource cpu=0m on Node node1 Nov 13 01:26:34.400: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh requesting resource cpu=0m on Node node2 Nov 13 01:26:34.400: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 requesting resource cpu=0m on Node node1 Nov 13 01:26:34.400: INFO: Pod collectd-74xkn requesting resource cpu=0m on Node node1 Nov 13 01:26:34.400: INFO: Pod collectd-mp2z6 requesting resource cpu=0m on Node node2 Nov 13 01:26:34.400: INFO: Pod node-exporter-hqkfs requesting resource cpu=112m on Node node1 Nov 13 01:26:34.400: INFO: Pod node-exporter-hstd9 requesting resource cpu=112m on Node node2 Nov 13 01:26:34.400: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Nov 13 01:26:34.400: INFO: Pod prometheus-operator-585ccfb458-qcz7s requesting resource cpu=100m on Node node1 Nov 13 01:26:34.400: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-q7m54 requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. Nov 13 01:26:34.400: INFO: Creating a pod which consumes cpu=53419m on Node node1 Nov 13 01:26:34.411: INFO: Creating a pod which consumes cpu=53594m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-3fb78787-d0de-4abe-a874-2f799c0887a3.16b6f6ba4123b862], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6670/filler-pod-3fb78787-d0de-4abe-a874-2f799c0887a3 to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-3fb78787-d0de-4abe-a874-2f799c0887a3.16b6f6ba98a9dcb8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-3fb78787-d0de-4abe-a874-2f799c0887a3.16b6f6bab07086d8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 398.888332ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-3fb78787-d0de-4abe-a874-2f799c0887a3.16b6f6bab8fe233b], Reason = [Created], Message = [Created container filler-pod-3fb78787-d0de-4abe-a874-2f799c0887a3] STEP: Considering event: Type = [Normal], Name = [filler-pod-3fb78787-d0de-4abe-a874-2f799c0887a3.16b6f6bac07ef39e], Reason = [Started], Message = [Started container filler-pod-3fb78787-d0de-4abe-a874-2f799c0887a3] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e7bc683-37cb-404e-a354-d05f1c681815.16b6f6ba41a55cf6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6670/filler-pod-7e7bc683-37cb-404e-a354-d05f1c681815 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e7bc683-37cb-404e-a354-d05f1c681815.16b6f6ba9d6189f5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e7bc683-37cb-404e-a354-d05f1c681815.16b6f6bab5fc384c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 412.782505ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e7bc683-37cb-404e-a354-d05f1c681815.16b6f6babd9e7456], Reason = [Created], Message = [Created container filler-pod-7e7bc683-37cb-404e-a354-d05f1c681815] STEP: Considering event: Type = [Normal], Name = [filler-pod-7e7bc683-37cb-404e-a354-d05f1c681815.16b6f6bac58244d2], Reason = [Started], Message = [Started container filler-pod-7e7bc683-37cb-404e-a354-d05f1c681815] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b6f6bb31a29c8f], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:26:39.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6670" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.314 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":2,"skipped":2008,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:26:39.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:26:39.650: INFO: Create a RollingUpdate DaemonSet Nov 13 01:26:39.654: INFO: Check that daemon pods launch on every node of the cluster Nov 13 01:26:39.658: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:39.659: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:39.659: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:39.661: INFO: Number of nodes with available pods: 0 Nov 13 01:26:39.661: INFO: Node node1 is running more than one daemon pod Nov 13 01:26:40.666: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:40.666: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:40.666: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:40.669: INFO: Number of nodes with available pods: 0 Nov 13 01:26:40.669: INFO: Node node1 is running more than one daemon pod Nov 13 01:26:41.667: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:41.667: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:41.667: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:41.670: INFO: Number of nodes with available pods: 0 Nov 13 01:26:41.670: INFO: Node node1 is running more than one daemon pod Nov 13 01:26:42.667: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:42.667: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:42.667: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:42.670: INFO: Number of nodes with available pods: 2 Nov 13 01:26:42.670: INFO: Number of running nodes: 2, number of available pods: 2 Nov 13 01:26:42.670: INFO: Update the DaemonSet to trigger a rollout Nov 13 01:26:42.676: INFO: Updating DaemonSet daemon-set Nov 13 01:26:51.691: INFO: Roll back the DaemonSet before rollout is complete Nov 13 01:26:51.698: INFO: Updating DaemonSet daemon-set Nov 13 01:26:51.698: INFO: Make sure DaemonSet rollback is complete Nov 13 01:26:51.701: INFO: Wrong image for pod: daemon-set-c4hrs. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Nov 13 01:26:51.701: INFO: Pod daemon-set-c4hrs is not available Nov 13 01:26:51.706: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:51.706: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:51.706: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:52.715: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:52.715: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:52.715: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:53.715: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:53.715: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:53.715: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:54.711: INFO: Pod daemon-set-58z9n is not available Nov 13 01:26:54.718: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:54.718: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:26:54.718: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4685, will wait for the garbage collector to delete the pods Nov 13 01:26:54.788: INFO: Deleting DaemonSet.extensions daemon-set took: 5.683529ms Nov 13 01:26:54.888: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.785793ms Nov 13 01:27:01.491: INFO: Number of nodes with available pods: 0 Nov 13 01:27:01.491: INFO: Number of running nodes: 0, number of available pods: 0 Nov 13 01:27:01.498: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"99390"},"items":null} Nov 13 01:27:01.501: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"99391"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:27:01.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4685" for this suite. • [SLOW TEST:21.910 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":3,"skipped":2285,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:27:01.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:27:01.556: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Nov 13 01:27:01.571: INFO: Number of nodes with available pods: 0 Nov 13 01:27:01.571: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Nov 13 01:27:01.588: INFO: Number of nodes with available pods: 0 Nov 13 01:27:01.588: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:02.592: INFO: Number of nodes with available pods: 0 Nov 13 01:27:02.592: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:03.594: INFO: Number of nodes with available pods: 0 Nov 13 01:27:03.594: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:04.593: INFO: Number of nodes with available pods: 1 Nov 13 01:27:04.593: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Nov 13 01:27:04.609: INFO: Number of nodes with available pods: 1 Nov 13 01:27:04.609: INFO: Number of running nodes: 0, number of available pods: 1 Nov 13 01:27:05.613: INFO: Number of nodes with available pods: 0 Nov 13 01:27:05.613: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Nov 13 01:27:05.622: INFO: Number of nodes with available pods: 0 Nov 13 01:27:05.623: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:06.625: INFO: Number of nodes with available pods: 0 Nov 13 01:27:06.625: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:07.627: INFO: Number of nodes with available pods: 0 Nov 13 01:27:07.627: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:08.626: INFO: Number of nodes with available pods: 0 Nov 13 01:27:08.626: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:09.627: INFO: Number of nodes with available pods: 0 Nov 13 01:27:09.627: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:10.626: INFO: Number of nodes with available pods: 0 Nov 13 01:27:10.626: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:11.626: INFO: Number of nodes with available pods: 0 Nov 13 01:27:11.626: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:12.627: INFO: Number of nodes with available pods: 0 Nov 13 01:27:12.627: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:13.627: INFO: Number of nodes with available pods: 0 Nov 13 01:27:13.627: INFO: Node node2 is running more than one daemon pod Nov 13 01:27:14.627: INFO: Number of nodes with available pods: 1 Nov 13 01:27:14.628: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6305, will wait for the garbage collector to delete the pods Nov 13 01:27:14.692: INFO: Deleting DaemonSet.extensions daemon-set took: 5.169926ms Nov 13 01:27:14.793: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.476562ms Nov 13 01:27:17.997: INFO: Number of nodes with available pods: 0 Nov 13 01:27:17.997: INFO: Number of running nodes: 0, number of available pods: 0 Nov 13 01:27:18.000: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"99521"},"items":null} Nov 13 01:27:18.002: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"99521"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:27:18.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6305" for this suite. • [SLOW TEST:16.504 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":4,"skipped":2304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:27:18.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Nov 13 01:27:18.332: INFO: Pod name wrapped-volume-race-82556a99-2e89-4065-b833-a8d2da5e1500: Found 1 pods out of 5 Nov 13 01:27:23.340: INFO: Pod name wrapped-volume-race-82556a99-2e89-4065-b833-a8d2da5e1500: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-82556a99-2e89-4065-b833-a8d2da5e1500 in namespace emptydir-wrapper-8701, will wait for the garbage collector to delete the pods Nov 13 01:27:37.418: INFO: Deleting ReplicationController wrapped-volume-race-82556a99-2e89-4065-b833-a8d2da5e1500 took: 5.713067ms Nov 13 01:27:37.518: INFO: Terminating ReplicationController wrapped-volume-race-82556a99-2e89-4065-b833-a8d2da5e1500 pods took: 100.103195ms STEP: Creating RC which spawns configmap-volume pods Nov 13 01:27:51.535: INFO: Pod name wrapped-volume-race-51a49d67-8d9a-4b22-b0a6-5dd7c4fae854: Found 0 pods out of 5 Nov 13 01:27:56.542: INFO: Pod name wrapped-volume-race-51a49d67-8d9a-4b22-b0a6-5dd7c4fae854: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-51a49d67-8d9a-4b22-b0a6-5dd7c4fae854 in namespace emptydir-wrapper-8701, will wait for the garbage collector to delete the pods Nov 13 01:28:10.624: INFO: Deleting ReplicationController wrapped-volume-race-51a49d67-8d9a-4b22-b0a6-5dd7c4fae854 took: 5.791088ms Nov 13 01:28:10.725: INFO: Terminating ReplicationController wrapped-volume-race-51a49d67-8d9a-4b22-b0a6-5dd7c4fae854 pods took: 100.850749ms STEP: Creating RC which spawns configmap-volume pods Nov 13 01:28:21.442: INFO: Pod name wrapped-volume-race-8b1070d7-8325-447c-82ba-1444806861ad: Found 0 pods out of 5 Nov 13 01:28:26.451: INFO: Pod name wrapped-volume-race-8b1070d7-8325-447c-82ba-1444806861ad: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8b1070d7-8325-447c-82ba-1444806861ad in namespace emptydir-wrapper-8701, will wait for the garbage collector to delete the pods Nov 13 01:28:40.532: INFO: Deleting ReplicationController wrapped-volume-race-8b1070d7-8325-447c-82ba-1444806861ad took: 5.208974ms Nov 13 01:28:40.632: INFO: Terminating ReplicationController wrapped-volume-race-8b1070d7-8325-447c-82ba-1444806861ad pods took: 100.209468ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:28:51.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8701" for this suite. • [SLOW TEST:93.596 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":5,"skipped":2343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:28:51.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Nov 13 01:28:51.664: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 01:29:51.739: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:29:51.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Nov 13 01:29:55.806: INFO: found a healthy node: node2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:30:11.862: INFO: pods created so far: [1 1 1] Nov 13 01:30:11.862: INFO: length of pods created so far: 3 Nov 13 01:30:15.878: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:30:22.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7739" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:30:22.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7853" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:91.324 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":6,"skipped":2696,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:30:22.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Nov 13 01:30:22.987: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 01:31:23.048: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:31:23.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:31:23.089: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Nov 13 01:31:23.098: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:31:23.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-1813" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:31:23.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7196" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.221 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":7,"skipped":2777,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:31:23.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Nov 13 01:31:23.218: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:23.219: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:23.219: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:23.221: INFO: Number of nodes with available pods: 0 Nov 13 01:31:23.221: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:24.226: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:24.226: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:24.226: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:24.228: INFO: Number of nodes with available pods: 0 Nov 13 01:31:24.229: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:25.227: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:25.228: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:25.228: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:25.230: INFO: Number of nodes with available pods: 0 Nov 13 01:31:25.230: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:26.227: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:26.227: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:26.227: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:26.231: INFO: Number of nodes with available pods: 2 Nov 13 01:31:26.231: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Nov 13 01:31:26.244: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:26.244: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:26.244: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:26.247: INFO: Number of nodes with available pods: 1 Nov 13 01:31:26.247: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:27.254: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:27.254: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:27.254: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:27.257: INFO: Number of nodes with available pods: 1 Nov 13 01:31:27.257: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:28.252: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:28.252: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:28.252: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:28.255: INFO: Number of nodes with available pods: 1 Nov 13 01:31:28.255: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:29.255: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:29.255: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:29.255: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:29.257: INFO: Number of nodes with available pods: 1 Nov 13 01:31:29.258: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:30.256: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:30.256: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:30.256: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:30.259: INFO: Number of nodes with available pods: 1 Nov 13 01:31:30.259: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:31.256: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:31.256: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:31.256: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:31.260: INFO: Number of nodes with available pods: 1 Nov 13 01:31:31.260: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:32.255: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:32.255: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:32.255: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:32.258: INFO: Number of nodes with available pods: 1 Nov 13 01:31:32.258: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:33.254: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:33.254: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:33.254: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:33.257: INFO: Number of nodes with available pods: 1 Nov 13 01:31:33.257: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:34.253: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:34.253: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:34.253: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:34.256: INFO: Number of nodes with available pods: 1 Nov 13 01:31:34.256: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:35.254: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:35.254: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:35.254: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:35.257: INFO: Number of nodes with available pods: 1 Nov 13 01:31:35.257: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:36.253: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:36.253: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:36.253: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:36.256: INFO: Number of nodes with available pods: 1 Nov 13 01:31:36.256: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:37.255: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:37.255: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:37.255: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:37.258: INFO: Number of nodes with available pods: 1 Nov 13 01:31:37.258: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:38.254: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:38.254: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:38.254: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:38.258: INFO: Number of nodes with available pods: 1 Nov 13 01:31:38.258: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:39.254: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:39.254: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:39.254: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:39.257: INFO: Number of nodes with available pods: 1 Nov 13 01:31:39.257: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:40.254: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:40.254: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:40.254: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:40.257: INFO: Number of nodes with available pods: 1 Nov 13 01:31:40.257: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:41.253: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:41.253: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:41.253: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:41.256: INFO: Number of nodes with available pods: 1 Nov 13 01:31:41.256: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:42.255: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:42.255: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:42.255: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:42.258: INFO: Number of nodes with available pods: 1 Nov 13 01:31:42.258: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:43.253: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:43.253: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:43.253: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:43.256: INFO: Number of nodes with available pods: 1 Nov 13 01:31:43.256: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:44.253: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:44.253: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:44.253: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:44.256: INFO: Number of nodes with available pods: 1 Nov 13 01:31:44.256: INFO: Node node1 is running more than one daemon pod Nov 13 01:31:45.254: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:45.254: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:45.254: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:31:45.257: INFO: Number of nodes with available pods: 2 Nov 13 01:31:45.257: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1623, will wait for the garbage collector to delete the pods Nov 13 01:31:45.320: INFO: Deleting DaemonSet.extensions daemon-set took: 5.512018ms Nov 13 01:31:45.421: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.692ms Nov 13 01:31:51.423: INFO: Number of nodes with available pods: 0 Nov 13 01:31:51.423: INFO: Number of running nodes: 0, number of available pods: 0 Nov 13 01:31:51.425: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"101383"},"items":null} Nov 13 01:31:51.427: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"101383"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:31:51.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1623" for this suite. • [SLOW TEST:28.269 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":8,"skipped":2781,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:31:51.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Nov 13 01:31:51.478: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 01:32:51.529: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Nov 13 01:32:51.555: INFO: Created pod: pod0-sched-preemption-low-priority Nov 13 01:32:51.576: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:33:13.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8734" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:82.222 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":9,"skipped":2782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:33:13.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Nov 13 01:33:13.729: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:13.729: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:13.729: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:13.731: INFO: Number of nodes with available pods: 0 Nov 13 01:33:13.731: INFO: Node node1 is running more than one daemon pod Nov 13 01:33:14.739: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:14.740: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:14.740: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:14.744: INFO: Number of nodes with available pods: 0 Nov 13 01:33:14.744: INFO: Node node1 is running more than one daemon pod Nov 13 01:33:15.738: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:15.738: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:15.738: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:15.741: INFO: Number of nodes with available pods: 0 Nov 13 01:33:15.741: INFO: Node node1 is running more than one daemon pod Nov 13 01:33:16.738: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:16.738: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:16.739: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:16.741: INFO: Number of nodes with available pods: 1 Nov 13 01:33:16.741: INFO: Node node1 is running more than one daemon pod Nov 13 01:33:17.736: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:17.736: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:17.736: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:17.739: INFO: Number of nodes with available pods: 2 Nov 13 01:33:17.739: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Nov 13 01:33:17.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:17.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:17.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:17.758: INFO: Number of nodes with available pods: 1 Nov 13 01:33:17.758: INFO: Node node2 is running more than one daemon pod Nov 13 01:33:18.763: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:18.763: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:18.763: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:18.766: INFO: Number of nodes with available pods: 1 Nov 13 01:33:18.766: INFO: Node node2 is running more than one daemon pod Nov 13 01:33:19.762: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:19.762: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:19.762: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:19.764: INFO: Number of nodes with available pods: 1 Nov 13 01:33:19.764: INFO: Node node2 is running more than one daemon pod Nov 13 01:33:20.766: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:20.766: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:20.767: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:20.770: INFO: Number of nodes with available pods: 1 Nov 13 01:33:20.770: INFO: Node node2 is running more than one daemon pod Nov 13 01:33:21.763: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:21.763: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:21.763: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:21.766: INFO: Number of nodes with available pods: 2 Nov 13 01:33:21.766: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6573, will wait for the garbage collector to delete the pods Nov 13 01:33:21.827: INFO: Deleting DaemonSet.extensions daemon-set took: 4.00119ms Nov 13 01:33:21.928: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.769333ms Nov 13 01:33:31.532: INFO: Number of nodes with available pods: 0 Nov 13 01:33:31.532: INFO: Number of running nodes: 0, number of available pods: 0 Nov 13 01:33:31.534: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"101807"},"items":null} Nov 13 01:33:31.537: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"101807"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:33:31.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6573" for this suite. • [SLOW TEST:17.877 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":10,"skipped":3485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:33:31.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 01:33:31.589: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 01:33:31.597: INFO: Waiting for terminating namespaces to be deleted... Nov 13 01:33:31.599: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 01:33:31.609: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 01:33:31.609: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:33:31.609: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 01:33:31.609: INFO: Container discover ready: false, restart count 0 Nov 13 01:33:31.609: INFO: Container init ready: false, restart count 0 Nov 13 01:33:31.609: INFO: Container install ready: false, restart count 0 Nov 13 01:33:31.609: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.609: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 01:33:31.609: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.609: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 01:33:31.609: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.609: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:33:31.609: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.609: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:33:31.609: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.609: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:33:31.609: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.609: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:33:31.609: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.609: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:33:31.609: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 01:33:31.609: INFO: Container collectd ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:33:31.609: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 01:33:31.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:33:31.609: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 01:33:31.609: INFO: Container config-reloader ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Container grafana ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Container prometheus ready: true, restart count 1 Nov 13 01:33:31.609: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 container statuses recorded) Nov 13 01:33:31.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 01:33:31.609: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 01:33:31.619: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 01:33:31.619: INFO: Container discover ready: false, restart count 0 Nov 13 01:33:31.619: INFO: Container init ready: false, restart count 0 Nov 13 01:33:31.619: INFO: Container install ready: false, restart count 0 Nov 13 01:33:31.619: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 01:33:31.619: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:33:31.619: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:33:31.619: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 01:33:31.619: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:33:31.619: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:33:31.619: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 01:33:31.619: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 01:33:31.619: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:33:31.619: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:33:31.619: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:33:31.619: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 01:33:31.619: INFO: Container collectd ready: true, restart count 0 Nov 13 01:33:31.619: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:33:31.619: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:33:31.619: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 01:33:31.619: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:33:31.619: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:33:31.619: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 01:33:31.619: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b6f71b66517a85], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:33:32.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3143" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":11,"skipped":4214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:33:32.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 01:33:32.696: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 01:33:32.705: INFO: Waiting for terminating namespaces to be deleted... Nov 13 01:33:32.708: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 01:33:32.718: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 01:33:32.718: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:33:32.718: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 01:33:32.718: INFO: Container discover ready: false, restart count 0 Nov 13 01:33:32.718: INFO: Container init ready: false, restart count 0 Nov 13 01:33:32.718: INFO: Container install ready: false, restart count 0 Nov 13 01:33:32.718: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.718: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 01:33:32.718: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.718: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 01:33:32.718: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.718: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:33:32.718: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.718: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:33:32.718: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.718: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:33:32.718: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.718: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:33:32.718: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.718: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:33:32.718: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 01:33:32.718: INFO: Container collectd ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:33:32.718: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 01:33:32.718: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:33:32.718: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 01:33:32.718: INFO: Container config-reloader ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Container grafana ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Container prometheus ready: true, restart count 1 Nov 13 01:33:32.718: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 container statuses recorded) Nov 13 01:33:32.718: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 01:33:32.718: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 01:33:32.738: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 01:33:32.738: INFO: Container discover ready: false, restart count 0 Nov 13 01:33:32.738: INFO: Container init ready: false, restart count 0 Nov 13 01:33:32.738: INFO: Container install ready: false, restart count 0 Nov 13 01:33:32.738: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 01:33:32.738: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:33:32.738: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:33:32.738: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 01:33:32.738: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:33:32.738: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:33:32.738: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 01:33:32.738: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 01:33:32.738: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:33:32.738: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:33:32.738: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:33:32.738: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 01:33:32.738: INFO: Container collectd ready: true, restart count 0 Nov 13 01:33:32.738: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:33:32.738: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:33:32.738: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 01:33:32.738: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:33:32.738: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:33:32.738: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 01:33:32.738: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-205e59fe-866e-4fe0-a8b6-6fb21a14bedc 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-205e59fe-866e-4fe0-a8b6-6fb21a14bedc off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-205e59fe-866e-4fe0-a8b6-6fb21a14bedc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:33:40.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9079" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.166 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":12,"skipped":4803,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:33:40.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 13 01:33:40.886: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Nov 13 01:33:40.897: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:40.898: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:40.898: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:40.902: INFO: Number of nodes with available pods: 0 Nov 13 01:33:40.902: INFO: Node node1 is running more than one daemon pod Nov 13 01:33:41.908: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:41.908: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:41.909: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:41.912: INFO: Number of nodes with available pods: 0 Nov 13 01:33:41.912: INFO: Node node1 is running more than one daemon pod Nov 13 01:33:42.909: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:42.909: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:42.909: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:42.912: INFO: Number of nodes with available pods: 0 Nov 13 01:33:42.912: INFO: Node node1 is running more than one daemon pod Nov 13 01:33:43.910: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:43.910: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:43.910: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:43.912: INFO: Number of nodes with available pods: 2 Nov 13 01:33:43.912: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Nov 13 01:33:43.935: INFO: Wrong image for pod: daemon-set-2j99v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Nov 13 01:33:43.939: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:43.939: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:43.939: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:44.945: INFO: Wrong image for pod: daemon-set-2j99v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Nov 13 01:33:44.949: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:44.949: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:44.949: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:45.944: INFO: Wrong image for pod: daemon-set-2j99v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Nov 13 01:33:45.949: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:45.949: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:45.949: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:46.943: INFO: Wrong image for pod: daemon-set-2j99v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Nov 13 01:33:46.947: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:46.947: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:46.947: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:47.943: INFO: Wrong image for pod: daemon-set-2j99v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Nov 13 01:33:47.943: INFO: Pod daemon-set-7wqm9 is not available Nov 13 01:33:47.947: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:47.947: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:47.947: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:48.944: INFO: Wrong image for pod: daemon-set-2j99v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Nov 13 01:33:48.944: INFO: Pod daemon-set-7wqm9 is not available Nov 13 01:33:48.949: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:48.949: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:48.949: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:49.947: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:49.947: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:49.947: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:50.949: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:50.949: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:50.949: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:51.951: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:51.951: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:51.951: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:52.947: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:52.947: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:52.947: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:53.948: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:53.948: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:53.948: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:54.950: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:54.950: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:54.950: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:55.947: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:55.947: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:55.947: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:56.951: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:56.952: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:56.952: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:57.950: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:57.950: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:57.950: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:58.948: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:58.948: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:58.948: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:59.950: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:59.950: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:33:59.950: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:00.951: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:00.951: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:00.951: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:01.944: INFO: Pod daemon-set-pt7nz is not available Nov 13 01:34:01.947: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:01.948: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:01.948: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Nov 13 01:34:01.952: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:01.952: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:01.952: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:01.954: INFO: Number of nodes with available pods: 1 Nov 13 01:34:01.954: INFO: Node node1 is running more than one daemon pod Nov 13 01:34:02.961: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:02.961: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:02.961: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:02.963: INFO: Number of nodes with available pods: 1 Nov 13 01:34:02.963: INFO: Node node1 is running more than one daemon pod Nov 13 01:34:03.961: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:03.961: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:03.961: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 13 01:34:03.963: INFO: Number of nodes with available pods: 2 Nov 13 01:34:03.963: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5297, will wait for the garbage collector to delete the pods Nov 13 01:34:04.043: INFO: Deleting DaemonSet.extensions daemon-set took: 5.889674ms Nov 13 01:34:04.143: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.620265ms Nov 13 01:34:11.446: INFO: Number of nodes with available pods: 0 Nov 13 01:34:11.446: INFO: Number of running nodes: 0, number of available pods: 0 Nov 13 01:34:11.449: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"102101"},"items":null} Nov 13 01:34:11.451: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"102101"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:34:11.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5297" for this suite. • [SLOW TEST:30.625 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":13,"skipped":5276,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:34:11.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:34:17.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2177" for this suite. STEP: Destroying namespace "nsdeletetest-7995" for this suite. Nov 13 01:34:17.559: INFO: Namespace nsdeletetest-7995 was already deleted STEP: Destroying namespace "nsdeletetest-219" for this suite. • [SLOW TEST:6.096 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":14,"skipped":5308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:34:17.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 01:34:17.606: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 01:34:17.614: INFO: Waiting for terminating namespaces to be deleted... Nov 13 01:34:17.616: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 01:34:17.624: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 01:34:17.624: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:34:17.624: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 01:34:17.624: INFO: Container discover ready: false, restart count 0 Nov 13 01:34:17.624: INFO: Container init ready: false, restart count 0 Nov 13 01:34:17.624: INFO: Container install ready: false, restart count 0 Nov 13 01:34:17.624: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.624: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 01:34:17.624: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.624: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 01:34:17.624: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.624: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:34:17.624: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.624: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 01:34:17.624: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.624: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:34:17.624: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.624: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:34:17.624: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.624: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:34:17.624: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 01:34:17.624: INFO: Container collectd ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:34:17.624: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 01:34:17.624: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:34:17.624: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 01:34:17.624: INFO: Container config-reloader ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Container grafana ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Container prometheus ready: true, restart count 1 Nov 13 01:34:17.624: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 container statuses recorded) Nov 13 01:34:17.624: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 01:34:17.624: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 01:34:17.633: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 01:34:17.633: INFO: Container discover ready: false, restart count 0 Nov 13 01:34:17.633: INFO: Container init ready: false, restart count 0 Nov 13 01:34:17.633: INFO: Container install ready: false, restart count 0 Nov 13 01:34:17.633: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 01:34:17.633: INFO: Container nodereport ready: true, restart count 0 Nov 13 01:34:17.633: INFO: Container reconcile ready: true, restart count 0 Nov 13 01:34:17.633: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.633: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 01:34:17.633: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.633: INFO: Container kube-multus ready: true, restart count 1 Nov 13 01:34:17.633: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.633: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 01:34:17.633: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.633: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 01:34:17.633: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.633: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 01:34:17.633: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.633: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 01:34:17.633: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.633: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 01:34:17.633: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.633: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 01:34:17.633: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 01:34:17.634: INFO: Container collectd ready: true, restart count 0 Nov 13 01:34:17.634: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 01:34:17.634: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 01:34:17.634: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 01:34:17.634: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 01:34:17.634: INFO: Container node-exporter ready: true, restart count 0 Nov 13 01:34:17.634: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 01:34:17.634: INFO: Container tas-extender ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b10596c5-f746-481b-986d-ce6aa57d7504 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-b10596c5-f746-481b-986d-ce6aa57d7504 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b10596c5-f746-481b-986d-ce6aa57d7504 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:39:25.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1922" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.204 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":15,"skipped":5422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:39:25.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Nov 13 01:39:25.805: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 01:40:25.878: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Nov 13 01:40:25.906: INFO: Created pod: pod0-sched-preemption-low-priority Nov 13 01:40:25.926: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:40:45.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-512" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:80.235 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":16,"skipped":5534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 01:40:46.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 01:41:01.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1396" for this suite. STEP: Destroying namespace "nsdeletetest-9133" for this suite. Nov 13 01:41:01.124: INFO: Namespace nsdeletetest-9133 was already deleted STEP: Destroying namespace "nsdeletetest-4687" for this suite. • [SLOW TEST:15.119 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":17,"skipped":5720,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSNov 13 01:41:01.131: INFO: Running AfterSuite actions on all nodes Nov 13 01:41:01.131: INFO: Running AfterSuite actions on node 1 Nov 13 01:41:01.131: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":17,"skipped":5753,"failed":0} Ran 17 of 5770 Specs in 867.058 seconds SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5753 Skipped PASS Ginkgo ran 1 suite in 14m28.405447979s Test Suite Passed