I0321 23:21:38.082346 7 e2e.go:129] Starting e2e run "4cab3fb6-b338-4770-8eee-403bf1f5ae66" on Ginkgo node 1
{"msg":"Test Suite starting","total":16,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616368896 - Will randomize all specs
Will run 16 of 5737 specs

Mar 21 23:21:38.102: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:21:38.104: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 21 23:21:38.125: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 21 23:21:39.238: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Mar 21 23:21:39.238: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 21 23:21:39.238: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 21 23:21:39.822: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 21 23:21:39.822: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 21 23:21:39.822: INFO: e2e test version: v1.21.0-beta.1
Mar 21 23:21:39.824: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 21 23:21:39.824: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:21:40.025: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:21:40.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
Mar 21 23:21:42.484: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Mar 21 23:21:43.069: INFO: Waiting up to 1m0s for all nodes to be ready
Mar 21 23:22:43.339: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node latest-worker.
STEP: Apply 10 fake resource to node latest-worker2.
[It] validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
Mar 21 23:23:00.638: FAIL: Unexpected error:
    <*errors.StatusError | 0xc003a2af00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"high\" is forbidden: no PriorityClass with name sched-preemption-high-priority was found",
            Reason: "Forbidden",
            Details: {Name: "high", Group: "", Kind: "pods", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 403,
        },
    }
    pods "high" is forbidden: no PriorityClass with name sched-preemption-high-priority was found
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.createPausePod(0xc001013e40, 0x6b55e98, 0x4, 0xc003f4d740, 0x15, 0xc00251b548, 0x0, 0xc003f33260, 0x0, 0xc003053830, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:864 +0x1b9
k8s.io/kubernetes/test/e2e/scheduling.runPausePodWithTimeout(0xc001013e40, 0x6b55e98, 0x4, 0xc003f4d740, 0x15, 0xc00251b548, 0x0, 0xc003f33260, 0x0, 0xc003053830, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:873 +0x78
k8s.io/kubernetes/test/e2e/scheduling.runPausePod(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:869
k8s.io/kubernetes/test/e2e/scheduling.glob..func5.5.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:381 +0x8aa
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002b5ea80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002b5ea80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002b5ea80, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "sched-preemption-7274".
STEP: Found 10 events.
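Note on the failure above: the 403 is returned by pod admission because the test creates pod "high" with a priorityClassName of sched-preemption-high-priority, and no PriorityClass of that name exists in the cluster at that moment. The sketch below shows, using client-go, how such a PriorityClass would be created before the pod; it is an illustration only, not the test's own code, and the priority value (999) and kubeconfig path are assumptions.

// Minimal sketch (assumptions noted above): create the PriorityClass the pod refers to.
package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses (path assumed from the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// PriorityClass named exactly as in the error message; the value 999 is a placeholder.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-high-priority"},
		Value:      999,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created PriorityClass sched-preemption-high-priority")
}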
Mar 21 23:23:03.615: INFO: At 2021-03-21 23:22:44 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-preemption-7274/without-label to latest-worker
Mar 21 23:23:03.615: INFO: At 2021-03-21 23:22:46 +0000 UTC - event for without-label: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine
Mar 21 23:23:03.615: INFO: At 2021-03-21 23:22:47 +0000 UTC - event for without-label: {kubelet latest-worker} Created: Created container without-label
Mar 21 23:23:03.615: INFO: At 2021-03-21 23:22:48 +0000 UTC - event for without-label: {kubelet latest-worker} Started: Started container without-label
Mar 21 23:23:03.616: INFO: At 2021-03-21 23:22:51 +0000 UTC - event for without-label: {kubelet latest-worker} Killing: Stopping container without-label
Mar 21 23:23:03.616: INFO: At 2021-03-21 23:22:51 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-preemption-7274/without-label to latest-worker2
Mar 21 23:23:03.616: INFO: At 2021-03-21 23:22:54 +0000 UTC - event for without-label: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine
Mar 21 23:23:03.616: INFO: At 2021-03-21 23:22:56 +0000 UTC - event for without-label: {kubelet latest-worker2} Created: Created container without-label
Mar 21 23:23:03.616: INFO: At 2021-03-21 23:22:56 +0000 UTC - event for without-label: {kubelet latest-worker2} Started: Started container without-label
Mar 21 23:23:03.616: INFO: At 2021-03-21 23:22:58 +0000 UTC - event for without-label: {kubelet latest-worker2} Killing: Stopping container without-label
Mar 21 23:23:03.804: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 21 23:23:03.804: INFO:
Mar 21 23:23:04.600: INFO: Logging node info for node latest-control-plane
Mar 21 23:23:04.834: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6917517 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:23:04.835: INFO: Logging kubelet events for node latest-control-plane Mar 21 23:23:05.044: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 21 23:23:05.195: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 21 23:23:05.195: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 21 23:23:05.195: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container kube-scheduler ready: true, restart count 0 Mar 21 23:23:05.195: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container kube-apiserver ready: true, restart count 0 
Mar 21 23:23:05.195: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:23:05.195: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container etcd ready: true, restart count 0 Mar 21 23:23:05.195: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:23:05.195: INFO: coredns-74ff55c5b-2wlxf started at 2021-03-21 17:53:32 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container coredns ready: true, restart count 0 Mar 21 23:23:05.195: INFO: no-snat-test9vrbk started at 2021-03-21 23:21:36 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.195: INFO: Container no-snat-test ready: false, restart count 0 W0321 23:23:05.232611 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:23:05.382: INFO: Latency metrics for node latest-control-plane Mar 21 23:23:05.382: INFO: Logging node info for node latest-worker Mar 21 23:23:05.485: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6919948 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-
mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-m
ock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 19:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 19:58:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-21 23:22:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:23:05.487: INFO: Logging kubelet events for node latest-worker Mar 21 23:23:05.500: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 21 23:23:05.513: INFO: rally-735e53de-v7lor5xk-nxxq5 started at 2021-03-21 23:22:14 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.513: INFO: Container rally-735e53de-v7lor5xk ready: false, restart count 0 Mar 21 23:23:05.514: INFO: no-snat-testgwdqz started at 2021-03-21 23:21:36 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container no-snat-test ready: false, restart count 0 Mar 21 23:23:05.514: INFO: ss2-0 started at 2021-03-21 23:22:49 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container webserver ready: true, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-2x7jr started at 2021-03-21 23:21:41 
+0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-qswpc started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-h9r94 started at 2021-03-21 23:21:40 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-pdv4w started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: pod-client started at 2021-03-21 23:22:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container pod-client ready: true, restart count 0 Mar 21 23:23:05.514: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-msh9v started at 2021-03-21 23:21:40 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: rally-735e53de-v7lor5xk-tdgww started at 2021-03-21 23:22:14 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container rally-735e53de-v7lor5xk ready: false, restart count 0 Mar 21 23:23:05.514: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-q4pvr started at 2021-03-21 23:21:40 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-zn56l started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: ss2-1 started at 2021-03-21 23:22:56 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container webserver ready: true, restart count 0 Mar 21 23:23:05.514: INFO: inclusterclient started at 2021-03-21 23:22:57 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container inclusterclient ready: true, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-5f2np started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-m2x4t started at 2021-03-21 23:21:40 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: 
cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-69gq2 started at 2021-03-21 23:21:40 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-6b8zm started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:05.514: INFO: netserver-0 started at 2021-03-21 23:23:02 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container webserver ready: false, restart count 0 Mar 21 23:23:05.514: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:05.514: INFO: Container kube-proxy ready: true, restart count 0 W0321 23:23:05.576200 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:23:05.932: INFO: Latency metrics for node latest-worker Mar 21 23:23:05.932: INFO: Logging node info for node latest-worker2 Mar 21 23:23:06.945: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6919951 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","cs
i-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098
","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes
-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 18:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 19:58:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-21 23:22:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:19:27 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 
k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:23:06.946: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:23:06.951: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:23:07.818: INFO: netserver-1 started at 2021-03-21 23:23:03 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container webserver ready: false, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-g6qvk started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:23:07.818: INFO: pfpod started at 2021-03-21 23:22:56 +0000 UTC (0+2 container statuses recorded) Mar 21 23:23:07.818: INFO: Container portforwardtester ready: true, restart count 0 Mar 21 23:23:07.818: INFO: 
Container readiness ready: false, restart count 0 Mar 21 23:23:07.818: INFO: chaos-controller-manager-69c479c674-hcpp6 started at 2021-03-21 18:05:18 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:23:07.818: INFO: chaos-daemon-gfm87 started at 2021-03-21 17:24:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-8tk6k started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-5pq24 started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-sk44j started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: run-log-test started at 2021-03-21 23:22:32 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container run-log-test ready: false, restart count 0 Mar 21 23:23:07.818: INFO: iperf2-clients-7dq4j started at 2021-03-21 23:21:26 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container iperf2-client ready: false, restart count 0 Mar 21 23:23:07.818: INFO: coredns-74ff55c5b-rb257 started at 2021-03-21 17:53:30 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container coredns ready: true, restart count 0 Mar 21 23:23:07.818: INFO: kindnet-lhbxs started at 2021-03-21 17:24:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-8qswq started at 2021-03-21 23:21:40 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: pod-server-2 started at 2021-03-21 23:22:38 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-zdqxv started at 2021-03-21 23:21:40 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: ss2-2 started at 2021-03-21 23:23:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container webserver ready: false, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-bhvqd started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: httpd-deployment-948b4c64c-x864q started at 2021-03-21 23:21:53 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container httpd ready: false, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-kck7c 
started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-srml5 started at 2021-03-21 23:21:41 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:07.818: INFO: no-snat-testz5x7q started at 2021-03-21 23:21:36 +0000 UTC (0+1 container statuses recorded) Mar 21 23:23:07.818: INFO: Container no-snat-test ready: false, restart count 0 W0321 23:23:07.986044 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:23:08.461: INFO: Latency metrics for node latest-worker2 Mar 21 23:23:08.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7274" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • Failure [89.096 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 Mar 21 23:23:00.638: Unexpected error: <*errors.StatusError | 0xc003a2af00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "pods \"high\" is forbidden: no PriorityClass with name sched-preemption-high-priority was found", Reason: "Forbidden", Details: {Name: "high", Group: "", Kind: "pods", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 403, }, } pods "high" is forbidden: no PriorityClass with name sched-preemption-high-priority was found occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:864 ------------------------------ {"msg":"FAILED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":16,"completed":0,"skipped":11,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:72 [BeforeEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:23:09.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:47 Mar 21 23:23:09.862: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:23:09.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-5689" for this suite. [AfterEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:67 S [SKIPPING] in Spec Setup (BeforeEach) [0.878 seconds] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should spread the pods of a service across zones [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:72 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:48 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:23:10.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 21 23:23:10.069: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 23:23:10.090: INFO: Waiting for terminating namespaces to be deleted... 
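The spec that starts here, "validates that taints-tolerations is respected if matching", taints a node with a random NoSchedule taint and then relaunches the pod with a matching toleration (see the STEP lines further down). A minimal sketch of such a taint/toleration pair, assuming the k8s.io/api Go module; the key and value literals are stand-ins for the random kubernetes.io/e2e-taint-key-* values the test generates:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Taint placed on the chosen node; NoSchedule keeps non-tolerating pods off it.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example", // hypothetical key
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}
	// Matching toleration added to the relaunched pod so it can land on that node.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("taint %s=%s:%s tolerated: %v\n",
		taint.Key, taint.Value, taint.Effect, toleration.ToleratesTaint(&taint))
}

The e2e framework drives this through its own helpers rather than this exact code; the point is only that the toleration's key, value, operator, and effect must match the taint for the relaunched pod to schedule onto the tainted node.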
Mar 21 23:23:10.095: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 21 23:23:10.132: INFO: rally-735e53de-v7lor5xk-nxxq5 from c-rally-735e53de-hq1pu3q5 started at 2021-03-21 23:22:14 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container rally-735e53de-v7lor5xk ready: false, restart count 0 Mar 21 23:23:10.132: INFO: rally-735e53de-v7lor5xk-tdgww from c-rally-735e53de-hq1pu3q5 started at 2021-03-21 23:22:14 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container rally-735e53de-v7lor5xk ready: false, restart count 0 Mar 21 23:23:10.132: INFO: pod-client from conntrack-3032 started at 2021-03-21 23:22:05 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container pod-client ready: true, restart count 0 Mar 21 23:23:10.132: INFO: chaos-daemon-qkndt from default started at 2021-03-21 18:05:47 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:23:10.132: INFO: kindnet-sbskd from kube-system started at 2021-03-21 18:05:46 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:23:10.132: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-2x7jr from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-5f2np from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-69gq2 from kubelet-6692 started at 2021-03-21 23:21:40 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-6b8zm from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-h9r94 from kubelet-6692 started at 2021-03-21 23:21:40 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-m2x4t from kubelet-6692 started at 2021-03-21 23:21:40 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-msh9v from kubelet-6692 started at 2021-03-21 23:21:40 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-pdv4w from kubelet-6692 started at 
2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-q4pvr from kubelet-6692 started at 2021-03-21 23:21:40 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-qswpc from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-zn56l from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.132: INFO: netserver-0 from nettest-8060 started at 2021-03-21 23:23:02 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container webserver ready: false, restart count 0 Mar 21 23:23:10.132: INFO: no-snat-testgwdqz from no-snat-test-8788 started at 2021-03-21 23:21:36 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container no-snat-test ready: false, restart count 0 Mar 21 23:23:10.132: INFO: ss2-0 from statefulset-7908 started at 2021-03-21 23:22:49 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container webserver ready: true, restart count 0 Mar 21 23:23:10.132: INFO: ss2-1 from statefulset-7908 started at 2021-03-21 23:22:56 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container webserver ready: true, restart count 0 Mar 21 23:23:10.132: INFO: inclusterclient from svcaccounts-7255 started at 2021-03-21 23:22:57 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.132: INFO: Container inclusterclient ready: true, restart count 0 Mar 21 23:23:10.132: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 21 23:23:10.170: INFO: pod-server-2 from conntrack-3032 started at 2021-03-21 23:22:38 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:23:10.170: INFO: chaos-controller-manager-69c479c674-hcpp6 from default started at 2021-03-21 18:05:18 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:23:10.170: INFO: chaos-daemon-gfm87 from default started at 2021-03-21 17:24:47 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:23:10.170: INFO: coredns-74ff55c5b-rb257 from kube-system started at 2021-03-21 17:53:30 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container coredns ready: true, restart count 0 Mar 21 23:23:10.170: INFO: kindnet-lhbxs from kube-system started at 2021-03-21 17:24:47 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:23:10.170: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:23:10.170: INFO: httpd-deployment-948b4c64c-x864q from 
kubectl-7963 started at 2021-03-21 23:21:53 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container httpd ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-5pq24 from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-8qswq from kubelet-6692 started at 2021-03-21 23:21:40 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-8tk6k from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-bhvqd from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-g6qvk from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-kck7c from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-sk44j from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-srml5 from kubelet-6692 started at 2021-03-21 23:21:41 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1-zdqxv from kubelet-6692 started at 2021-03-21 23:21:40 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 ready: false, restart count 0 Mar 21 23:23:10.170: INFO: netserver-1 from nettest-8060 started at 2021-03-21 23:23:03 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container webserver ready: false, restart count 0 Mar 21 23:23:10.170: INFO: iperf2-clients-7dq4j from network-perf-8958 started at 2021-03-21 23:21:26 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container iperf2-client ready: false, restart count 0 Mar 21 23:23:10.170: INFO: no-snat-testz5x7q from no-snat-test-8788 started at 2021-03-21 23:21:36 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container no-snat-test ready: false, restart count 0 Mar 21 23:23:10.170: INFO: pfpod from port-forwarding-6290 started at 2021-03-21 23:22:56 +0000 UTC (2 container statuses recorded) Mar 21 23:23:10.170: INFO: Container 
portforwardtester ready: true, restart count 0 Mar 21 23:23:10.170: INFO: Container readiness ready: true, restart count 0 Mar 21 23:23:10.170: INFO: ss2-2 from statefulset-7908 started at 2021-03-21 23:23:05 +0000 UTC (1 container statuses recorded) Mar 21 23:23:10.170: INFO: Container webserver ready: false, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-51ec6cca-33b6-4289-b6b9-3592d67cb8b4=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-0df2301c-2211-4cb9-8847-bc53ecede719 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-0df2301c-2211-4cb9-8847-bc53ecede719 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-0df2301c-2211-4cb9-8847-bc53ecede719 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-51ec6cca-33b6-4289-b6b9-3592d67cb8b4=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:23:25.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1925" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:15.539 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":16,"completed":1,"skipped":988,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:262 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:23:25.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 Mar 21 23:23:25.821: INFO: Waiting up to 1m0s for all nodes to be ready Mar 21 23:24:26.353: INFO: Waiting for terminating namespaces to be deleted... Mar 21 23:24:26.562: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 21 23:24:27.930: INFO: The status of Pod coredns-74ff55c5b-sbpzx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:27.930: INFO: The status of Pod coredns-74ff55c5b-thqtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:27.930: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (1 seconds elapsed) Mar 21 23:24:27.930: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Mar 21 23:24:27.930: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:27.930: INFO: coredns-74ff55c5b-sbpzx latest-control-plane Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC }] Mar 21 23:24:27.930: INFO: coredns-74ff55c5b-thqtl latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC }] Mar 21 23:24:27.930: INFO: Mar 21 23:24:30.838: INFO: The status of Pod coredns-74ff55c5b-sbpzx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:30.838: INFO: The status of Pod coredns-74ff55c5b-thqtl is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:30.838: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (4 seconds elapsed) Mar 21 23:24:30.838: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Mar 21 23:24:30.838: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:30.838: INFO: coredns-74ff55c5b-sbpzx latest-control-plane Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC }] Mar 21 23:24:30.838: INFO: coredns-74ff55c5b-thqtl latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC }] Mar 21 23:24:30.838: INFO: Mar 21 23:24:32.161: INFO: The status of Pod coredns-74ff55c5b-thqtl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:32.161: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (5 seconds elapsed) Mar 21 23:24:32.161: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Mar 21 23:24:32.161: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:32.161: INFO: coredns-74ff55c5b-thqtl latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC }] Mar 21 23:24:32.161: INFO: Mar 21 23:24:34.145: INFO: The status of Pod coredns-74ff55c5b-thqtl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:34.145: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (7 seconds elapsed) Mar 21 23:24:34.145: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Mar 21 23:24:34.145: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:34.145: INFO: coredns-74ff55c5b-thqtl latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC }] Mar 21 23:24:34.145: INFO: Mar 21 23:24:36.241: INFO: The status of Pod coredns-74ff55c5b-thqtl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:36.241: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (9 seconds elapsed) Mar 21 23:24:36.241: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Mar 21 23:24:36.241: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:36.241: INFO: coredns-74ff55c5b-thqtl latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC }] Mar 21 23:24:36.241: INFO: Mar 21 23:24:39.198: INFO: The status of Pod coredns-74ff55c5b-thqtl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:39.198: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) Mar 21 23:24:39.198: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Mar 21 23:24:39.198: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:39.198: INFO: coredns-74ff55c5b-thqtl latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:26 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:25 +0000 UTC }] Mar 21 23:24:39.198: INFO: Mar 21 23:24:40.936: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:40.936: INFO: The status of Pod coredns-74ff55c5b-lv4vw is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:40.936: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (14 seconds elapsed) Mar 21 23:24:40.936: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Mar 21 23:24:40.936: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:40.936: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:40.936: INFO: coredns-74ff55c5b-lv4vw latest-control-plane Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:40.936: INFO: Mar 21 23:24:42.732: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:42.732: INFO: The status of Pod coredns-74ff55c5b-lv4vw is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:42.732: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (16 seconds elapsed) Mar 21 23:24:42.732: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Mar 21 23:24:42.732: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:42.732: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:42.732: INFO: coredns-74ff55c5b-lv4vw latest-control-plane Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:42.732: INFO: Mar 21 23:24:44.693: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:44.693: INFO: The status of Pod coredns-74ff55c5b-lv4vw is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:44.693: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (18 seconds elapsed) Mar 21 23:24:44.693: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Mar 21 23:24:44.693: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:44.693: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:44.693: INFO: coredns-74ff55c5b-lv4vw latest-control-plane Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:44.693: INFO: Mar 21 23:24:46.118: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:46.118: INFO: The status of Pod coredns-74ff55c5b-lv4vw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:46.118: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (19 seconds elapsed) Mar 21 23:24:46.118: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Mar 21 23:24:46.118: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:46.118: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:46.118: INFO: coredns-74ff55c5b-lv4vw latest-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:46.118: INFO: Mar 21 23:24:48.976: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:48.976: INFO: The status of Pod coredns-74ff55c5b-lv4vw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:48.976: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (22 seconds elapsed) Mar 21 23:24:48.976: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Mar 21 23:24:48.976: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:48.976: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:48.976: INFO: coredns-74ff55c5b-lv4vw latest-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:48.976: INFO: Mar 21 23:24:50.062: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:50.062: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (23 seconds elapsed) Mar 21 23:24:50.062: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Mar 21 23:24:50.062: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:50.062: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:50.062: INFO: Mar 21 23:24:52.328: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:24:52.328: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (25 seconds elapsed) Mar 21 23:24:52.328: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Mar 21 23:24:52.328: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:24:52.328: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:24:52.328: INFO: Mar 21 23:24:54.451: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (27 seconds elapsed) Mar 21 23:24:54.451: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
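The ComputeCPUMemFraction lines that follow divide the total CPU and memory requested on each node by the node's allocatable capacity (16000 millicores of CPU and 134922104832 bytes of memory on these nodes). A small reproduction of the arithmetic for latest-worker, using the totals the framework logs below:

package main

import "fmt"

func main() {
	const (
		cpuAllocatableMilli = 16000        // 16 CPUs, in millicores
		memAllocatableBytes = 134922104832 // node allocatable memory in bytes
	)
	// Totals logged for latest-worker before the balanced pods are created.
	requestedCPUMilli := 200.0
	requestedMemBytes := 157286400.0

	fmt.Println("cpuFraction:", requestedCPUMilli/cpuAllocatableMilli) // 0.0125
	fmt.Println("memFraction:", requestedMemBytes/memAllocatableBytes) // ~0.0011657570877347874
}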
Mar 21 23:24:54.451: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:24:54.559: INFO: Pod for on the node: rally-9c46f946-o1ss669v-v2nsm, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:24:54.559: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: failure-1, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: success, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: execpod7sbxc, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: externalip-test-2g8rm, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: externalip-test-ls4kf, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Pod for on the node: inclusterclient, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.559: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 21 23:24:54.559: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 21 23:24:54.559: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:24:54.594: INFO: Pod for on the node: rally-9c46f946-o1ss669v-pd6fj, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.594: INFO: Pod for on the node: chaos-controller-manager-69c479c674-hcpp6, Cpu: 25, Mem: 268435456 Mar 21 23:24:54.594: INFO: Pod for on the node: chaos-daemon-gfm87, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.594: INFO: Pod for on the node: liveness-exec, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.594: INFO: Pod for on the node: liveness-http, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.594: INFO: Pod for on the node: coredns-74ff55c5b-kcjgk, Cpu: 100, Mem: 73400320 Mar 21 23:24:54.594: INFO: Pod for on the node: kindnet-lhbxs, Cpu: 100, Mem: 52428800 Mar 21 23:24:54.594: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.594: INFO: Pod for on the node: failure-2, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.594: INFO: Pod for on the node: httpd, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.594: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.594: INFO: Node: latest-worker2, totalRequestedCPUResource: 325, cpuAllocatableMil: 16000, cpuFraction: 0.0203125 Mar 21 23:24:54.594: INFO: Node: latest-worker2, totalRequestedMemResource: 499122176, memAllocatableVal: 134922104832, memFraction: 0.0036993358250783917 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:262 Mar 21 23:24:54.594: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:24:54.721: INFO: Pod for on the node: rally-9c46f946-o1ss669v-v2nsm, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:24:54.721: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: failure-1, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: success, 
Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: execpod7sbxc, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: externalip-test-2g8rm, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: externalip-test-ls4kf, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Pod for on the node: inclusterclient, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.721: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 21 23:24:54.721: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 21 23:24:54.721: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:24:54.821: INFO: Pod for on the node: rally-9c46f946-o1ss669v-pd6fj, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.821: INFO: Pod for on the node: chaos-controller-manager-69c479c674-hcpp6, Cpu: 25, Mem: 268435456 Mar 21 23:24:54.822: INFO: Pod for on the node: chaos-daemon-gfm87, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.822: INFO: Pod for on the node: liveness-exec, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.822: INFO: Pod for on the node: liveness-http, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.822: INFO: Pod for on the node: coredns-74ff55c5b-kcjgk, Cpu: 100, Mem: 73400320 Mar 21 23:24:54.822: INFO: Pod for on the node: kindnet-lhbxs, Cpu: 100, Mem: 52428800 Mar 21 23:24:54.822: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.822: INFO: Pod for on the node: failure-2, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.822: INFO: Pod for on the node: httpd, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.822: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:24:54.822: INFO: Node: latest-worker2, totalRequestedCPUResource: 325, cpuAllocatableMil: 16000, cpuFraction: 0.0203125 Mar 21 23:24:54.822: INFO: Node: latest-worker2, totalRequestedMemResource: 499122176, memAllocatableVal: 134922104832, memFraction: 0.0036993358250783917 Mar 21 23:24:54.910: INFO: Waiting for running... Mar 21 23:24:59.984: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
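The two "Waiting for running..." lines above correspond to one large filler pod per node, created so that each node's requested CPU (and, approximately, its requested memory) reaches about half of allocatable before the workload under test is scheduled; in the recomputation below, cpuFraction comes out at exactly 0.5 on both nodes. A sketch of the CPU sizing only, assuming a target fraction of 0.5 (the framework's helper also pads memory, which is why memFraction lands slightly above 0.5):

package main

import "fmt"

func main() {
	const cpuAllocatableMilli = 16000
	const targetFraction = 0.5

	// CPU requested on each node before balancing, as logged above.
	nodes := []struct {
		name         string
		requestedMil int64
	}{
		{"latest-worker", 200},
		{"latest-worker2", 325},
	}
	for _, n := range nodes {
		filler := int64(targetFraction*cpuAllocatableMilli) - n.requestedMil
		fmt.Printf("%s: filler pod requests %dm CPU\n", n.name, filler)
	}
	// Prints 7800m and 7675m, matching the requests of the
	// 938e7ba0-...-0 and bf8053d7-...-0 pods in the listings below.
}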
Mar 21 23:25:05.036: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:25:05.362: INFO: Pod for on the node: rally-9c46f946-o1ss669v-v2nsm, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:25:05.362: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: failure-1, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: success, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: 938e7ba0-df0c-4116-8758-a66affc7deba-0, Cpu: 7800, Mem: 67316348928 Mar 21 23:25:05.362: INFO: Pod for on the node: execpod7sbxc, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: externalip-test-2g8rm, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: externalip-test-ls4kf, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Pod for on the node: inclusterclient, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.362: INFO: Node: latest-worker, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 21 23:25:05.362: INFO: Node: latest-worker, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 21 23:25:05.362: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:25:05.620: INFO: Pod for on the node: rally-9c46f946-o1ss669v-pd6fj, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.620: INFO: Pod for on the node: chaos-controller-manager-69c479c674-hcpp6, Cpu: 25, Mem: 268435456 Mar 21 23:25:05.620: INFO: Pod for on the node: chaos-daemon-gfm87, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.620: INFO: Pod for on the node: liveness-exec, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.620: INFO: Pod for on the node: liveness-http, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.620: INFO: Pod for on the node: coredns-74ff55c5b-kcjgk, Cpu: 100, Mem: 73400320 Mar 21 23:25:05.620: INFO: Pod for on the node: kindnet-lhbxs, Cpu: 100, Mem: 52428800 Mar 21 23:25:05.620: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.620: INFO: Pod for on the node: failure-2, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.620: INFO: Pod for on the node: httpd, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.620: INFO: Pod for on the node: bf8053d7-f1ef-42d1-9de4-e77304564fa6-0, Cpu: 7675, Mem: 66974513152 Mar 21 23:25:05.620: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:25:05.620: INFO: Node: latest-worker2, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 21 23:25:05.620: INFO: Node: latest-worker2, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. 
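
The pods with UID-style names above (938e7ba0-...-0 and bf8053d7-...-0) are the "balanced" pause pods the test creates so that both workers sit at the same utilisation before the scoring step: their requests are sized so each node's totals land at 8000m CPU and 67473635328 bytes of memory, i.e. cpuFraction 0.5 and memFraction ~0.50009. A minimal sketch of that sizing arithmetic, assuming the padding request is simply target total minus current total (the log does not show how the framework picks the exact memory target):

package main

import "fmt"

// balancerRequest returns the extra request a padding ("balanced") pod needs
// so the node's total requested resource reaches the given target.
func balancerRequest(target, current int64) int64 {
	return target - current
}

func main() {
	// CPU (milli-cores): both workers are padded up to 8000m of 16000m allocatable.
	fmt.Println(balancerRequest(8000, 200)) // 7800 -> pod 938e7ba0-... on latest-worker
	fmt.Println(balancerRequest(8000, 325)) // 7675 -> pod bf8053d7-... on latest-worker2

	// Memory (bytes): both workers end up at 67473635328 requested,
	// i.e. memFraction 0.5000932605670187 of 134922104832 allocatable.
	fmt.Println(balancerRequest(67473635328, 157286400)) // 67316348928
	fmt.Println(balancerRequest(67473635328, 499122176)) // 66974513152
}
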
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-6270 to 1 STEP: Verify the pods should not scheduled to the node: latest-worker STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-6270, will wait for the garbage collector to delete the pods Mar 21 23:25:18.631: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 24.861261ms Mar 21 23:25:19.332: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 700.194272ms Mar 21 23:26:35.697: INFO: Failed to wait until all memory balanced pods are deleted: timed out waiting for the condition. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:26:35.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-6270" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:150 • [SLOW TEST:190.222 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:262 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":16,"completed":2,"skipped":1188,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:326 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:26:35.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 Mar 21 23:26:35.985: INFO: Waiting up to 1m0s for all nodes to be ready Mar 21 23:27:36.477: INFO: Waiting for terminating namespaces to be deleted... 
Mar 21 23:27:36.850: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 21 23:27:37.482: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:37.483: INFO: The status of Pod kindnet-lhbxs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:37.483: INFO: 11 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 21 23:27:37.483: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 21 23:27:37.483: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:27:37.483: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:27:37.483: INFO: kindnet-lhbxs latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 17:24:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:15 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:15 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 17:24:47 +0000 UTC }] Mar 21 23:27:37.483: INFO: Mar 21 23:27:40.426: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:40.426: INFO: The status of Pod kindnet-tgcxf is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:40.426: INFO: 11 / 13 pods in namespace 'kube-system' are running and ready (3 seconds elapsed) Mar 21 23:27:40.426: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Mar 21 23:27:40.426: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:27:40.426: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:27:40.426: INFO: kindnet-tgcxf latest-worker2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC }] Mar 21 23:27:40.426: INFO: Mar 21 23:27:42.795: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:42.795: INFO: The status of Pod kindnet-tgcxf is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:42.795: INFO: 11 / 13 pods in namespace 'kube-system' are running and ready (5 seconds elapsed) Mar 21 23:27:42.795: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 21 23:27:42.795: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:27:42.795: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:27:42.795: INFO: kindnet-tgcxf latest-worker2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC }] Mar 21 23:27:42.795: INFO: Mar 21 23:27:44.453: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:44.453: INFO: The status of Pod kindnet-tgcxf is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:44.453: INFO: 11 / 13 pods in namespace 'kube-system' are running and ready (7 seconds elapsed) Mar 21 23:27:44.453: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Mar 21 23:27:44.453: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:27:44.453: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:27:44.453: INFO: kindnet-tgcxf latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC }] Mar 21 23:27:44.453: INFO: Mar 21 23:27:46.720: INFO: The status of Pod coredns-74ff55c5b-kcjgk is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:46.720: INFO: The status of Pod kindnet-tgcxf is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:46.720: INFO: 11 / 13 pods in namespace 'kube-system' are running and ready (9 seconds elapsed) Mar 21 23:27:46.720: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 21 23:27:46.720: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:27:46.720: INFO: coredns-74ff55c5b-kcjgk latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:19 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:24:39 +0000 UTC }] Mar 21 23:27:46.720: INFO: kindnet-tgcxf latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC }] Mar 21 23:27:46.720: INFO: Mar 21 23:27:47.815: INFO: The status of Pod kindnet-tgcxf is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:27:47.815: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (10 seconds elapsed) Mar 21 23:27:47.815: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
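
The repeated "11 / 12 pods in namespace 'kube-system' are running and ready" lines above are a poll loop: the framework re-lists the kube-system pods every few seconds until every pod is Running and Ready or the 5m0s budget runs out. A minimal, self-contained sketch of that wait pattern (illustrative only; the real framework uses its own wait utilities):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForCondition polls check every interval until it returns true or the
// timeout expires, mirroring the "Waiting up to 5m0s ..." pattern in the log.
func waitForCondition(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := check()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	ready := 10
	err := waitForCondition(2*time.Second, 500*time.Millisecond, func() (bool, error) {
		// Stand-in for "N / 12 pods in namespace 'kube-system' are running and ready".
		ready++
		fmt.Printf("%d / 12 pods ready\n", ready)
		return ready >= 12, nil
	})
	fmt.Println("err:", err)
}
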
Mar 21 23:27:47.815: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:27:47.815: INFO: kindnet-tgcxf latest-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC ContainersNotReady containers with unready status: [kindnet-cni]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:27:37 +0000 UTC }] Mar 21 23:27:47.815: INFO: Mar 21 23:27:49.657: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) Mar 21 23:27:49.657: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 21 23:27:49.657: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:27:49.702: INFO: Pod for on the node: chaos-controller-manager-69c479c674-7xglh, Cpu: 25, Mem: 268435456 Mar 21 23:27:49.702: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.702: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:27:49.702: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.702: INFO: Pod for on the node: agnhost-primary-6jf8p, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.702: INFO: Pod for on the node: agnhost-primary-928pz, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.702: INFO: Pod for on the node: netserver-0, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.702: INFO: Pod for on the node: pfpod, Cpu: 200, Mem: 419430400 Mar 21 23:27:49.702: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.702: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.702: INFO: Pod for on the node: inclusterclient, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.702: INFO: Node: latest-worker, totalRequestedCPUResource: 225, cpuAllocatableMil: 16000, cpuFraction: 0.0140625 Mar 21 23:27:49.702: INFO: Node: latest-worker, totalRequestedMemResource: 425721856, memAllocatableVal: 134922104832, memFraction: 0.0031553158508021576 Mar 21 23:27:49.702: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:27:49.833: INFO: Pod for on the node: startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.833: INFO: Pod for on the node: chaos-daemon-qdvm8, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.833: INFO: Pod for on the node: kindnet-tgcxf, Cpu: 100, Mem: 52428800 Mar 21 23:27:49.833: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.833: INFO: Pod for on the node: explicit-root-uid, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.833: INFO: Pod for on the node: busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.833: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.833: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 21 23:27:49.833: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:326 Mar 21 23:27:49.833: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:27:49.888: INFO: Pod for on the node: 
chaos-controller-manager-69c479c674-7xglh, Cpu: 25, Mem: 268435456 Mar 21 23:27:49.888: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.888: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:27:49.888: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.888: INFO: Pod for on the node: agnhost-primary-6jf8p, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.888: INFO: Pod for on the node: agnhost-primary-928pz, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.888: INFO: Pod for on the node: netserver-0, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.888: INFO: Pod for on the node: pfpod, Cpu: 200, Mem: 419430400 Mar 21 23:27:49.888: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.888: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.888: INFO: Pod for on the node: inclusterclient, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.888: INFO: Node: latest-worker, totalRequestedCPUResource: 225, cpuAllocatableMil: 16000, cpuFraction: 0.0140625 Mar 21 23:27:49.888: INFO: Node: latest-worker, totalRequestedMemResource: 425721856, memAllocatableVal: 134922104832, memFraction: 0.0031553158508021576 Mar 21 23:27:49.888: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:27:49.960: INFO: Pod for on the node: startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.960: INFO: Pod for on the node: chaos-daemon-qdvm8, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.960: INFO: Pod for on the node: kindnet-tgcxf, Cpu: 100, Mem: 52428800 Mar 21 23:27:49.960: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.960: INFO: Pod for on the node: explicit-root-uid, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.960: INFO: Pod for on the node: busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.960: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:27:49.960: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 21 23:27:49.960: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 21 23:27:50.043: INFO: Waiting for running... Mar 21 23:28:00.407: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Mar 21 23:28:05.459: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:28:05.538: INFO: Pod for on the node: chaos-controller-manager-69c479c674-7xglh, Cpu: 25, Mem: 268435456 Mar 21 23:28:05.538: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:28:05.538: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Pod for on the node: agnhost-primary-6jf8p, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Pod for on the node: agnhost-primary-928pz, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Pod for on the node: netserver-0, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Pod for on the node: netserver-0, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Pod for on the node: pfpod, Cpu: 200, Mem: 419430400 Mar 21 23:28:05.538: INFO: Pod for on the node: 756260ed-859f-4ffd-a79d-1291ff80b155-0, Cpu: 7775, Mem: 67047913472 Mar 21 23:28:05.538: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Pod for on the node: inclusterclient, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.538: INFO: Node: latest-worker, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 21 23:28:05.538: INFO: Node: latest-worker, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 21 23:28:05.538: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:28:05.618: INFO: Pod for on the node: startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.618: INFO: Pod for on the node: chaos-daemon-qdvm8, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.618: INFO: Pod for on the node: kindnet-tgcxf, Cpu: 100, Mem: 52428800 Mar 21 23:28:05.618: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.618: INFO: Pod for on the node: httpd, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.618: INFO: Pod for on the node: netserver-1, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.618: INFO: Pod for on the node: a6ad63d8-0e0c-4225-8597-3d42c6098e05-0, Cpu: 7800, Mem: 67316348928 Mar 21 23:28:05.618: INFO: Pod for on the node: explicit-root-uid, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.618: INFO: Pod for on the node: busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.618: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:28:05.618: INFO: Node: latest-worker2, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 21 23:28:05.618: INFO: Node: latest-worker2, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Trying to apply 10 (tolerable) taints on the first node. 
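
The STEP above starts the taint/toleration half of the test: ten randomly named PreferNoSchedule taints go onto the first node (and, in the STEPs that follow, ten different ones onto the other node), while the test pod is given tolerations only for the first node's taints, so taint-toleration scoring prefers that node. A simplified sketch of the toleration match being exercised (Equal-operator only; the real rules also cover the Exists operator and empty keys):

package main

import "fmt"

// Taint and Toleration mirror just the fields that matter for the
// PreferNoSchedule matching shown below; they are stand-ins, not the API types.
type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Value, Effect string }

// tolerates reports whether a single Equal-operator toleration matches a taint:
// key and value must match, and the effect must match (an empty toleration
// effect matches any effect).
func tolerates(tol Toleration, t Taint) bool {
	if tol.Key != t.Key || tol.Value != t.Value {
		return false
	}
	return tol.Effect == "" || tol.Effect == t.Effect
}

func main() {
	t := Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-bd9fdf9d-7c9d-455f-9e7c",
		Value:  "testing-taint-value-d2b946b2-773f-410a-9893-1b5add69b112",
		Effect: "PreferNoSchedule",
	}
	tol := Toleration{Key: t.Key, Value: t.Value, Effect: "PreferNoSchedule"}
	fmt.Println(tolerates(tol, t))                      // true
	fmt.Println(tolerates(Toleration{Key: "other"}, t)) // false
}
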
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bd9fdf9d-7c9d-455f-9e7c=testing-taint-value-d2b946b2-773f-410a-9893-1b5add69b112:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fc341121-67a9-471c-a081=testing-taint-value-d7ca9d6f-9920-4b7a-bde7-184bb24411e9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0f105c3b-c214-4515-b613=testing-taint-value-d3c9d797-cd78-4b74-8c54-8e295e636abe:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ce7f4ae6-214e-4b10-9868=testing-taint-value-799035a1-d4aa-453e-a761-65580defaed1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-91d96683-47f9-4583-84a2=testing-taint-value-555ec90d-1891-4d56-9011-1ee29fcbdb1f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-48435281-f3f7-4411-b6ec=testing-taint-value-48897885-a075-4779-ae91-8e25db202568:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-82137d47-58c8-4301-930a=testing-taint-value-195e2dca-bca5-4cb1-96c2-8ca10000d54e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-332f6dba-646d-45f6-a130=testing-taint-value-1cd60e68-fb3d-43d4-a3ff-97688492f3b9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-296b863e-b26a-41cb-b98c=testing-taint-value-feb156dd-3217-4d2e-897a-e380b18a090e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9aede205-f3bb-42b1-9df4=testing-taint-value-f13b2704-2569-40f2-958a-fff4fcfd23f2:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7d17a583-8a74-48c3-9f3c=testing-taint-value-cb655f72-77a7-43cf-b9f7-78b8b295b85b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-657b9e40-597c-4235-a727=testing-taint-value-7dabb2a0-5212-48b1-a3d5-e6d5df055d94:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8751acdb-5515-4c4b-b189=testing-taint-value-807d5db6-d410-4a1c-805f-48feb9ee438d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-02317e84-159e-407a-b529=testing-taint-value-7fe5244c-4cd9-49c7-8196-0d504d724bff:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4cbc6f65-29cd-46d8-82f3=testing-taint-value-f4272a75-f692-49a6-ae9f-3c84cba031b9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cf5146e1-5e3f-4c96-8058=testing-taint-value-2d0be6e3-19b7-4ac1-994b-b94da8168300:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-32619f8e-db3b-4cac-83e9=testing-taint-value-49e6d4f6-6330-48ae-a0a8-60d24b845507:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8e6ef2a1-4579-49fb-bb68=testing-taint-value-11624828-e8fe-4aa0-a756-ffc37f94243f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2b2fc6b4-c68b-4d4a-9dea=testing-taint-value-7e92d36a-34b6-4879-9a27-96aaa5c121bd:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-a7d87b1a-30a5-4aa0-a916=testing-taint-value-1994c2de-f02d-4521-a953-8c35c2ae6799:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7d17a583-8a74-48c3-9f3c=testing-taint-value-cb655f72-77a7-43cf-b9f7-78b8b295b85b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-657b9e40-597c-4235-a727=testing-taint-value-7dabb2a0-5212-48b1-a3d5-e6d5df055d94:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8751acdb-5515-4c4b-b189=testing-taint-value-807d5db6-d410-4a1c-805f-48feb9ee438d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-02317e84-159e-407a-b529=testing-taint-value-7fe5244c-4cd9-49c7-8196-0d504d724bff:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4cbc6f65-29cd-46d8-82f3=testing-taint-value-f4272a75-f692-49a6-ae9f-3c84cba031b9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cf5146e1-5e3f-4c96-8058=testing-taint-value-2d0be6e3-19b7-4ac1-994b-b94da8168300:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-32619f8e-db3b-4cac-83e9=testing-taint-value-49e6d4f6-6330-48ae-a0a8-60d24b845507:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8e6ef2a1-4579-49fb-bb68=testing-taint-value-11624828-e8fe-4aa0-a756-ffc37f94243f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2b2fc6b4-c68b-4d4a-9dea=testing-taint-value-7e92d36a-34b6-4879-9a27-96aaa5c121bd:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a7d87b1a-30a5-4aa0-a916=testing-taint-value-1994c2de-f02d-4521-a953-8c35c2ae6799:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bd9fdf9d-7c9d-455f-9e7c=testing-taint-value-d2b946b2-773f-410a-9893-1b5add69b112:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fc341121-67a9-471c-a081=testing-taint-value-d7ca9d6f-9920-4b7a-bde7-184bb24411e9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0f105c3b-c214-4515-b613=testing-taint-value-d3c9d797-cd78-4b74-8c54-8e295e636abe:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ce7f4ae6-214e-4b10-9868=testing-taint-value-799035a1-d4aa-453e-a761-65580defaed1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-91d96683-47f9-4583-84a2=testing-taint-value-555ec90d-1891-4d56-9011-1ee29fcbdb1f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-48435281-f3f7-4411-b6ec=testing-taint-value-48897885-a075-4779-ae91-8e25db202568:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-82137d47-58c8-4301-930a=testing-taint-value-195e2dca-bca5-4cb1-96c2-8ca10000d54e:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-332f6dba-646d-45f6-a130=testing-taint-value-1cd60e68-fb3d-43d4-a3ff-97688492f3b9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-296b863e-b26a-41cb-b98c=testing-taint-value-feb156dd-3217-4d2e-897a-e380b18a090e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9aede205-f3bb-42b1-9df4=testing-taint-value-f13b2704-2569-40f2-958a-fff4fcfd23f2:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:28:47.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-4213" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:150 • [SLOW TEST:131.675 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:326 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":16,"completed":3,"skipped":1281,"failed":1,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:403 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:28:47.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 Mar 21 23:28:48.351: INFO: Waiting up to 1m0s for all nodes to be ready Mar 21 23:29:48.870: INFO: Waiting for terminating namespaces to be deleted... 
Mar 21 23:29:49.044: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 21 23:29:49.804: INFO: The status of Pod coredns-74ff55c5b-24ddj is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:29:49.804: INFO: The status of Pod coredns-74ff55c5b-wsrgt is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:29:49.804: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 21 23:29:49.804: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Mar 21 23:29:49.804: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:29:49.804: INFO: coredns-74ff55c5b-24ddj latest-control-plane Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC }] Mar 21 23:29:49.804: INFO: coredns-74ff55c5b-wsrgt latest-control-plane Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC }] Mar 21 23:29:49.804: INFO: Mar 21 23:29:51.898: INFO: The status of Pod coredns-74ff55c5b-24ddj is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:29:51.899: INFO: The status of Pod coredns-74ff55c5b-wsrgt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:29:51.899: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (2 seconds elapsed) Mar 21 23:29:51.899: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. 
Mar 21 23:29:51.899: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:29:51.899: INFO: coredns-74ff55c5b-24ddj latest-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC }] Mar 21 23:29:51.899: INFO: coredns-74ff55c5b-wsrgt latest-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC }] Mar 21 23:29:51.899: INFO: Mar 21 23:29:54.063: INFO: The status of Pod coredns-74ff55c5b-24ddj is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:29:54.063: INFO: The status of Pod coredns-74ff55c5b-wsrgt is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:29:54.063: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (5 seconds elapsed) Mar 21 23:29:54.063: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Mar 21 23:29:54.063: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:29:54.063: INFO: coredns-74ff55c5b-24ddj latest-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC }] Mar 21 23:29:54.063: INFO: coredns-74ff55c5b-wsrgt latest-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC }] Mar 21 23:29:54.063: INFO: Mar 21 23:29:56.728: INFO: The status of Pod coredns-74ff55c5b-24ddj is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Mar 21 23:29:56.728: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (7 seconds elapsed) Mar 21 23:29:56.728: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Mar 21 23:29:56.728: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:29:56.728: INFO: coredns-74ff55c5b-24ddj latest-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:29:45 +0000 UTC }] Mar 21 23:29:56.728: INFO: Mar 21 23:29:58.758: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (9 seconds elapsed) Mar 21 23:29:58.758: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Mar 21 23:29:58.758: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:29:58.758: INFO: Mar 21 23:30:00.373: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (11 seconds elapsed) Mar 21 23:30:00.374: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 21 23:30:00.374: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:30:00.867: INFO: Pod for on the node: chaos-controller-manager-69c479c674-7xglh, Cpu: 25, Mem: 268435456 Mar 21 23:30:00.867: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.867: INFO: Pod for on the node: e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.867: INFO: Pod for on the node: simpletest.rc-7wfcj, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.867: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:30:00.867: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.867: INFO: Pod for on the node: httpd, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.867: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.867: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.867: INFO: Pod for on the node: inclusterclient, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.867: INFO: Node: latest-worker, totalRequestedCPUResource: 225, cpuAllocatableMil: 16000, cpuFraction: 0.0140625 Mar 21 23:30:00.867: INFO: Node: latest-worker, totalRequestedMemResource: 425721856, memAllocatableVal: 134922104832, memFraction: 0.0031553158508021576 Mar 21 23:30:00.867: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:30:00.931: INFO: Pod for on the node: chaos-daemon-qdvm8, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.931: INFO: Pod for on the node: kindnet-tgcxf, Cpu: 100, Mem: 52428800 Mar 21 23:30:00.931: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.931: INFO: Pod for on the node: run-test, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.931: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.931: INFO: Pod for on the node: taint-eviction-3, Cpu: 100, Mem: 209715200 Mar 21 23:30:00.931: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 21 23:30:00.931: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:389 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label 
to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. Mar 21 23:31:07.409: FAIL: Unexpected error: <*errors.errorString | 0xc00025e250>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.runPausePodWithTimeout(0xc001190580, 0x6b6ffad, 0xd, 0x0, 0x0, 0xc002958540, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:874 +0x109 k8s.io/kubernetes/test/e2e/scheduling.runPausePod(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:869 k8s.io/kubernetes/test/e2e/scheduling.runPodAndGetNodeName(0xc001190580, 0x6b6ffad, 0xd, 0x0, 0x0, 0xc002958540, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:885 +0xca k8s.io/kubernetes/test/e2e/scheduling.Get2NodesThatCanRunPod(0xc001190580, 0x31, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:975 +0x2b8 k8s.io/kubernetes/test/e2e/scheduling.glob..func6.6.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:391 +0x94 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002b5ea80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002b5ea80) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002b5ea80, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:397 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "sched-priority-411". STEP: Found 6 events. Mar 21 23:31:07.415: INFO: At 2021-03-21 23:30:01 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-priority-411/without-label to latest-worker Mar 21 23:31:07.416: INFO: At 2021-03-21 23:30:03 +0000 UTC - event for without-label: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine Mar 21 23:31:07.416: INFO: At 2021-03-21 23:30:04 +0000 UTC - event for without-label: {kubelet latest-worker} Created: Created container without-label Mar 21 23:31:07.416: INFO: At 2021-03-21 23:30:05 +0000 UTC - event for without-label: {kubelet latest-worker} Started: Started container without-label Mar 21 23:31:07.416: INFO: At 2021-03-21 23:30:07 +0000 UTC - event for without-label: {kubelet latest-worker} Killing: Stopping container without-label Mar 21 23:31:07.416: INFO: At 2021-03-21 23:30:07 +0000 UTC - event for without-label: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-evict-taint-key: evictTaintVal}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. 
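
The FailedScheduling event above accounts for every node in this three-node kind cluster: the control-plane is ruled out by its node-role.kubernetes.io/master:NoSchedule taint, one worker appears to carry a leftover kubernetes.io/e2e-evict-taint-key=evictTaintVal taint (compare the taint-eviction-3 pod listed on latest-worker2 a minute earlier), and the remaining worker is excluded by the probe pod's node affinity, which is set up to avoid the node Get2NodesThatCanRunPod has already claimed. With no feasible node left, the second probe pod never starts and the BeforeEach times out. A toy sketch of that elimination, with names taken from the event (which node fails which check is an inference from the surrounding log, not something the event states per node):

package main

import "fmt"

// node is a trimmed-down view of the three kind nodes as implied by the
// FailedScheduling event: a taint list plus the node name.
type node struct {
	name   string
	taints []string
}

func main() {
	nodes := []node{
		{name: "latest-control-plane", taints: []string{"node-role.kubernetes.io/master:NoSchedule"}},
		{name: "latest-worker"}, // assumed to be the node the probe pod's affinity avoids
		{name: "latest-worker2", taints: []string{"kubernetes.io/e2e-evict-taint-key=evictTaintVal"}},
	}
	// The second probe pod avoids the node already found and carries no
	// tolerations for the remaining taints, so every node is filtered out.
	avoid := "latest-worker"

	feasible := 0
	for _, n := range nodes {
		if n.name == avoid || len(n.taints) > 0 {
			continue
		}
		feasible++
	}
	fmt.Printf("%d/%d nodes are available\n", feasible, len(nodes)) // 0/3 nodes are available
}
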
Mar 21 23:31:07.434: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:31:07.434: INFO: without-label Pending [{PodScheduled False 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:30:07 +0000 UTC Unschedulable 0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-evict-taint-key: evictTaintVal}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.}] Mar 21 23:31:07.434: INFO: Mar 21 23:31:07.452: INFO: Logging node info for node latest-control-plane Mar 21 23:31:07.488: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6930471 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:29:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:29:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:29:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:29:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:31:07.489: INFO: Logging kubelet events for node latest-control-plane Mar 21 23:31:07.543: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 21 23:31:07.554: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container etcd ready: true, restart count 0 Mar 21 23:31:07.554: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:31:07.554: INFO: coredns-74ff55c5b-wsrgt started at 2021-03-21 23:29:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container coredns ready: true, restart count 0 Mar 21 23:31:07.554: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 21 23:31:07.554: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container kube-scheduler ready: true, restart count 0 Mar 21 23:31:07.554: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container kube-apiserver ready: true, restart count 0 Mar 21 23:31:07.554: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:31:07.554: INFO: coredns-74ff55c5b-24ddj started at 2021-03-21 23:29:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container coredns ready: true, restart count 0 Mar 21 23:31:07.554: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.554: INFO: Container local-path-provisioner ready: true, restart count 0 W0321 23:31:07.581688 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 21 23:31:07.702: INFO: Latency metrics for node latest-control-plane Mar 21 23:31:07.702: INFO: Logging node info for node latest-worker Mar 21 23:31:07.728: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6928506 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 19:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:31:07.729: INFO: Logging kubelet events for node latest-worker Mar 21 23:31:07.747: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 21 23:31:07.829: INFO: e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 started at 2021-03-21 23:29:30 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:31:07.829: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:31:07.829: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:31:07.829: INFO: inclusterclient started at 2021-03-21 23:22:57 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container inclusterclient ready: true, restart count 0 Mar 21 23:31:07.829: INFO: ss2-2 started at 2021-03-21 23:30:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container webserver ready: false, restart count 0 Mar 21 23:31:07.829: INFO: rally-a1d190f1-w1480vcq started at 2021-03-21 23:30:11 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container rally-a1d190f1-w1480vcq ready: false, restart count 0 Mar 21 23:31:07.829: INFO: ss2-0 started at 2021-03-21 23:30:00 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container webserver ready: true, restart count 0 Mar 21 23:31:07.829: INFO: ss2-1 started at 2021-03-21 23:28:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container webserver ready: true, restart count 0 Mar 21 23:31:07.829: INFO: chaos-controller-manager-69c479c674-7xglh started at 2021-03-21 23:27:10 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:31:07.829: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:31:07.829: INFO: httpd started at 2021-03-21 23:30:50 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:07.829: INFO: Container httpd ready: true, restart count 0 W0321 23:31:07.855725 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
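------------------------------
The node dump and the per-node pod listing above are the framework's standard failure diagnostics: it fetches the Node object (capacity, allocatable, conditions, images) and then lists every pod bound to that node. A minimal client-go sketch of the same idea follows; it is illustrative only, not the framework's actual helper. The kubeconfig path and node name are taken from this log, while the program structure, names, and error handling are assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported by the run (">>> kubeConfig: /root/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Fetch the Node object, analogous to "Logging node info for node latest-worker".
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "latest-worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s allocatable: cpu=%s memory=%s pods=%s\n", node.Name,
		node.Status.Allocatable.Cpu(), node.Status.Allocatable.Memory(),
		node.Status.Allocatable.Pods())

	// List pods scheduled onto that node across all namespaces, analogous to the
	// "Logging pods the kubelet thinks is on node ..." section of the dump.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=latest-worker",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s restarts=%d\n", p.Namespace, p.Name,
			p.Status.Phase, totalRestarts(p))
	}
}

// totalRestarts sums container restart counts, mirroring the
// "restart count N" entries in the listing above.
func totalRestarts(p corev1.Pod) int32 {
	var n int32
	for _, s := range p.Status.ContainerStatuses {
		n += s.RestartCount
	}
	return n
}
------------------------------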
Mar 21 23:31:08.243: INFO: Latency metrics for node latest-worker Mar 21 23:31:08.243: INFO: Logging node info for node latest-worker2 Mar 21 23:31:08.260: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6931008 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 18:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-21 23:29:52 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-21 23:29:52 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:28:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:31:08.260: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:31:08.323: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:31:08.618: INFO: kube-proxy-7q92q started at 2021-02-19 
10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:08.618: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:31:08.618: INFO: taint-eviction-3 started at 2021-03-21 23:29:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:31:08.618: INFO: Container pause ready: false, restart count 0 W0321 23:31:08.906872 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:31:09.237: INFO: Latency metrics for node latest-worker2 Mar 21 23:31:09.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-411" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:150 • Failure in Spec Setup (BeforeEach) [141.998 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:385 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:403 Mar 21 23:31:07.409: Unexpected error: <*errors.errorString | 0xc00025e250>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:874 ------------------------------ {"msg":"FAILED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":16,"completed":3,"skipped":2010,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:31:09.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 21 23:31:09.684: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 23:31:09.790: INFO: Waiting for terminating namespaces 
to be deleted... Mar 21 23:31:09.900: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 21 23:31:09.939: INFO: rally-a1d190f1-w1480vcq from c-rally-a1d190f1-g0lnhe7s started at 2021-03-21 23:30:11 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container rally-a1d190f1-w1480vcq ready: false, restart count 0 Mar 21 23:31:09.939: INFO: chaos-controller-manager-69c479c674-7xglh from default started at 2021-03-21 23:27:10 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:31:09.939: INFO: chaos-daemon-qkndt from default started at 2021-03-21 18:05:47 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:31:09.939: INFO: e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 from dns-config-map-6822 started at 2021-03-21 23:29:30 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:31:09.939: INFO: kindnet-sbskd from kube-system started at 2021-03-21 18:05:46 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:31:09.939: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:31:09.939: INFO: pfpod from port-forwarding-2629 started at 2021-03-21 23:31:09 +0000 UTC (2 container statuses recorded) Mar 21 23:31:09.939: INFO: Container portforwardtester ready: false, restart count 0 Mar 21 23:31:09.939: INFO: Container readiness ready: false, restart count 0 Mar 21 23:31:09.939: INFO: ss2-0 from statefulset-7908 started at 2021-03-21 23:30:00 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container webserver ready: true, restart count 0 Mar 21 23:31:09.939: INFO: ss2-1 from statefulset-7908 started at 2021-03-21 23:28:47 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container webserver ready: true, restart count 0 Mar 21 23:31:09.939: INFO: ss2-2 from statefulset-7908 started at 2021-03-21 23:30:46 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container webserver ready: false, restart count 0 Mar 21 23:31:09.939: INFO: inclusterclient from svcaccounts-7255 started at 2021-03-21 23:22:57 +0000 UTC (1 container statuses recorded) Mar 21 23:31:09.939: INFO: Container inclusterclient ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-800de181-7a15-4fa0-bd9c-235763c91e74 42 STEP: Trying to relaunch the pod, now with labels. 
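------------------------------
The steps above drive the required-NodeAffinity check: a throwaway pod is scheduled first to discover a usable node, that node then gets a random test label (kubernetes.io/e2e-800de181-7a15-4fa0-bd9c-235763c91e74 = 42), and the pod is relaunched with a hard scheduling requirement on exactly that label. A hedged sketch of such a pod spec using the corev1 types is shown below; the label key/value, the "with-labels" pod name, and the agnhost image all appear in this log, but the helper itself is an illustration, not the test's actual code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithRequiredNodeAffinity returns a pod that can only be scheduled onto a
// node carrying labelKey=labelValue. "Required" node affinity is a hard
// constraint: if no node matches, the pod stays Pending instead of landing on
// an arbitrary node.
func podWithRequiredNodeAffinity(labelKey, labelValue string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				NodeAffinity: &corev1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
						NodeSelectorTerms: []corev1.NodeSelectorTerm{{
							MatchExpressions: []corev1.NodeSelectorRequirement{{
								Key:      labelKey,
								Operator: corev1.NodeSelectorOpIn,
								Values:   []string{labelValue},
							}},
						}},
					},
				},
			},
			Containers: []corev1.Container{{
				Name: "with-labels",
				// Image taken from the node's image list above; any always-present image works.
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
			}},
		},
	}
}

func main() {
	// Label key and value as applied to the found node in the steps above.
	pod := podWithRequiredNodeAffinity("kubernetes.io/e2e-800de181-7a15-4fa0-bd9c-235763c91e74", "42")
	fmt.Println(pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0])
}

Because the requirement matches the freshly applied label with NodeSelectorOpIn, only the labelled node can satisfy it, which is why the spec then only needs to see the relaunched pod running there before removing the label again.
------------------------------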
STEP: removing the label kubernetes.io/e2e-800de181-7a15-4fa0-bd9c-235763c91e74 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-800de181-7a15-4fa0-bd9c-235763c91e74 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:31:22.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9703" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:13.589 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":16,"completed":4,"skipped":2273,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/nvidia-gpus.go:325 [BeforeEach] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/nvidia-gpus.go:321 Mar 21 23:31:23.061: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 run Nvidia GPU Device Plugin tests with a recreation [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/nvidia-gpus.go:325 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/nvidia-gpus.go:322 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:31:23.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 21 23:31:23.277: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 23:31:23.876: INFO: Waiting for terminating namespaces to be deleted... Mar 21 23:31:24.169: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 21 23:31:24.505: INFO: rally-a1d190f1-w1480vcq from c-rally-a1d190f1-g0lnhe7s started at 2021-03-21 23:30:11 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container rally-a1d190f1-w1480vcq ready: false, restart count 0 Mar 21 23:31:24.505: INFO: chaos-controller-manager-69c479c674-7xglh from default started at 2021-03-21 23:27:10 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:31:24.505: INFO: chaos-daemon-qkndt from default started at 2021-03-21 18:05:47 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:31:24.505: INFO: e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 from dns-config-map-6822 started at 2021-03-21 23:29:30 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:31:24.505: INFO: kindnet-sbskd from kube-system started at 2021-03-21 18:05:46 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:31:24.505: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:31:24.505: INFO: httpd from kubectl-8450 started at 2021-03-21 23:31:21 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container httpd ready: false, restart count 0 Mar 21 23:31:24.505: INFO: pfpod from port-forwarding-2629 started at 2021-03-21 23:31:09 +0000 UTC (2 container statuses recorded) Mar 21 23:31:24.505: INFO: Container portforwardtester ready: false, restart count 0 Mar 21 23:31:24.505: INFO: Container readiness ready: false, restart count 0 Mar 21 23:31:24.505: INFO: with-labels from sched-pred-9703 started at 2021-03-21 23:31:16 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container with-labels ready: true, restart count 0 Mar 21 23:31:24.505: INFO: ss2-0 from statefulset-7908 started at 2021-03-21 23:30:00 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container webserver ready: true, restart count 0 Mar 21 23:31:24.505: INFO: ss2-1 from statefulset-7908 started at 2021-03-21 23:28:47 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container webserver ready: true, restart count 0 Mar 21 23:31:24.505: INFO: ss2-2 from statefulset-7908 started at 2021-03-21 23:30:46 +0000 UTC (1 container statuses 
recorded) Mar 21 23:31:24.505: INFO: Container webserver ready: false, restart count 0 Mar 21 23:31:24.505: INFO: inclusterclient from svcaccounts-7255 started at 2021-03-21 23:22:57 +0000 UTC (1 container statuses recorded) Mar 21 23:31:24.505: INFO: Container inclusterclient ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-00e38089-653a-40c2-acc2-e68df549df87 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.9 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.9 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-00e38089-653a-40c2-acc2-e68df549df87 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-00e38089-653a-40c2-acc2-e68df549df87 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:31:46.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8607" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:24.128 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":16,"completed":5,"skipped":2660,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:31:47.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 21 23:31:47.525: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 23:31:47.670: INFO: Waiting for terminating namespaces to be deleted... Mar 21 23:31:47.771: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 21 23:31:47.781: INFO: rally-a1d190f1-w1480vcq from c-rally-a1d190f1-g0lnhe7s started at 2021-03-21 23:30:11 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container rally-a1d190f1-w1480vcq ready: false, restart count 0 Mar 21 23:31:47.781: INFO: chaos-controller-manager-69c479c674-7xglh from default started at 2021-03-21 23:27:10 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:31:47.781: INFO: chaos-daemon-qkndt from default started at 2021-03-21 18:05:47 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:31:47.781: INFO: e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 from dns-config-map-6822 started at 2021-03-21 23:29:30 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:31:47.781: INFO: kindnet-sbskd from kube-system started at 2021-03-21 18:05:46 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:31:47.781: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:31:47.781: INFO: pfpod from port-forwarding-2629 started at 2021-03-21 23:31:09 +0000 UTC (2 container statuses recorded) Mar 21 23:31:47.781: INFO: Container portforwardtester ready: false, restart count 0 Mar 21 23:31:47.781: INFO: Container readiness ready: false, restart count 0 Mar 21 23:31:47.781: INFO: pfpod from port-forwarding-3033 started at 2021-03-21 23:31:39 +0000 UTC (2 container statuses recorded) Mar 21 23:31:47.781: INFO: Container portforwardtester ready: true, restart count 0 Mar 21 23:31:47.781: INFO: Container readiness ready: false, restart count 0 Mar 21 23:31:47.781: INFO: pod1 from sched-pred-8607 started at 2021-03-21 23:31:29 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container agnhost ready: true, restart count 0 Mar 21 23:31:47.781: INFO: pod2 from sched-pred-8607 started at 2021-03-21 23:31:35 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container agnhost ready: true, restart count 0 Mar 21 23:31:47.781: INFO: pod3 from sched-pred-8607 started at 2021-03-21 23:31:42 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container agnhost ready: false, restart count 0 Mar 21 23:31:47.781: INFO: with-labels from sched-pred-9703 started at 2021-03-21 23:31:16 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container with-labels ready: false, restart count 0 Mar 21 
23:31:47.781: INFO: ss2-0 from statefulset-7908 started at 2021-03-21 23:30:00 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container webserver ready: true, restart count 0 Mar 21 23:31:47.781: INFO: ss2-1 from statefulset-7908 started at 2021-03-21 23:28:47 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container webserver ready: true, restart count 0 Mar 21 23:31:47.781: INFO: ss2-2 from statefulset-7908 started at 2021-03-21 23:30:46 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container webserver ready: false, restart count 0 Mar 21 23:31:47.781: INFO: inclusterclient from svcaccounts-7255 started at 2021-03-21 23:22:57 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.781: INFO: Container inclusterclient ready: true, restart count 0 Mar 21 23:31:47.781: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 21 23:31:47.815: INFO: image-pull-test0c405502-4b8d-4e19-a122-669fd565a2c0 from container-runtime-5572 started at 2021-03-21 23:31:47 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.815: INFO: Container image-pull-test ready: false, restart count 0 Mar 21 23:31:47.815: INFO: chaos-daemon-wl4fl from default started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.815: INFO: Container chaos-daemon ready: false, restart count 0 Mar 21 23:31:47.815: INFO: kindnet-vhlbm from kube-system started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.815: INFO: Container kindnet-cni ready: false, restart count 0 Mar 21 23:31:47.815: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:31:47.815: INFO: Container kube-proxy ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Mar 21 23:31:51.426: INFO: Pod rally-a1d190f1-w1480vcq requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod image-pull-test0c405502-4b8d-4e19-a122-669fd565a2c0 requesting local ephemeral resource =0 on Node latest-worker2 Mar 21 23:31:51.426: INFO: Pod chaos-controller-manager-69c479c674-7xglh requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod chaos-daemon-qkndt requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod chaos-daemon-wl4fl requesting local ephemeral resource =0 on Node latest-worker2 Mar 21 23:31:51.426: INFO: Pod e2e-dns-configmap-7e17b543-a2a2-458a-861e-c63260e6d241 requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod kindnet-sbskd requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod kindnet-vhlbm requesting local ephemeral resource =0 on Node latest-worker2 Mar 21 23:31:51.426: INFO: Pod kube-proxy-5wvjm requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod kube-proxy-7q92q requesting local ephemeral resource =0 on Node latest-worker2 Mar 21 23:31:51.426: INFO: Pod pfpod requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod pod1 requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod pod2 requesting local ephemeral resource =0 on Node latest-worker 
Mar 21 23:31:51.426: INFO: Pod pod3 requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod with-labels requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod ss2-0 requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod ss2-1 requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod ss2-2 requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Pod inclusterclient requesting local ephemeral resource =0 on Node latest-worker Mar 21 23:31:51.426: INFO: Using pod capacity: 235846652313 Mar 21 23:31:51.426: INFO: Node: latest-worker has local ephemeral resource allocatable: 2358466523136 Mar 21 23:31:51.426: INFO: Node: latest-worker2 has local ephemeral resource allocatable: 2358466523136 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Mar 21 23:31:57.651: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.166e7f81db0553fd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-0 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.166e7f82cb2a31dc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-0.166e7f85a2c035fa], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.166e7f85c31104a7], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.166e7f81f25b463a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-1 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.166e7f850461d391], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-1.166e7f885b646b0e], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.166e7f886c6a74c1], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.166e7f8287e4e224], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-10 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.166e7f85b20194c2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-10.166e7f8868c9ebfe], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.166e7f887b302467], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.166e7f82a307d944], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-11 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-11.166e7f839a9bba61], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-11.166e7f85afab6f97], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering 
event: Type = [Normal], Name = [overcommit-11.166e7f85f9db3750], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.166e7f82ac9adc1c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-12 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-12.166e7f8560459428], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-12.166e7f883baad5bb], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.166e7f884f18ffd7], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.166e7f82c021d828], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-13 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-13.166e7f85ad88bf76], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-13.166e7f8830ddda19], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.166e7f8846119f21], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.166e7f82d2c15802], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-14 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-14.166e7f85b0002b02], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-14.166e7f884f1919db], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.166e7f88612a8ef6], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.166e7f82d53c9923], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-15 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-15.166e7f85d99c21f7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-15.166e7f8833bfa873], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.166e7f8846128807], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.166e7f82ddeee5c9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-16 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-16.166e7f864c54d630], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-16.166e7f8845b60dea], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.166e7f885b6a5536], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.166e7f831920e8e7], Reason = [Scheduled], Message = 
[Successfully assigned sched-pred-9856/overcommit-17 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-17.166e7f864b1bdc13], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-17.166e7f88587b1b30], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.166e7f886c687a2f], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.166e7f832d83c805], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-18 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-18.166e7f864a4b1493], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-18.166e7f88612a7334], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.166e7f887400d183], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.166e7f833b8eb27b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-19 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-19.166e7f865bcd64b8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-19.166e7f885b60d4e1], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.166e7f886c654d53], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.166e7f81ff496887], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-2 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.166e7f8471c76738], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-2.166e7f87934d3a70], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.166e7f87b3518c4d], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.166e7f81ff496863], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-3 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.166e7f8560efdcb3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-3.166e7f8846118688], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.166e7f885b95fa98], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.166e7f82073211e2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-4 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.166e7f85081aeebf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: 
Considering event: Type = [Normal], Name = [overcommit-4.166e7f885aa8dcec], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.166e7f886c67361d], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.166e7f8218a67809], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-5 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.166e7f851b93f270], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-5.166e7f879f821fad], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.166e7f87ec244f44], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.166e7f8221ab17b7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-6 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.166e7f8657f033ff], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-6.166e7f886f4f0385], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.166e7f888036940d], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.166e7f822d095fde], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-7 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.166e7f8565b9759d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-7.166e7f8853709c9f], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.166e7f8868c9ebe9], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.166e7f822e93944c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-8 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.166e7f85bb878a00], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-8.166e7f886c697b28], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.166e7f888037d6f1], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.166e7f82633b8083], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9856/overcommit-9 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-9.166e7f8341e939e8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-9.166e7f85a2b9815a], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.166e7f85c312929a], Reason = [Started], Message = [Started container 
overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.166e7f8b61f0faf9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:32:34.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9856" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:47.209 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":16,"completed":6,"skipped":2921,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:32:34.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 21 23:32:34.780: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 23:32:34.963: INFO: Waiting for terminating namespaces to be deleted... 
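The ephemeral-storage saturation recorded in the spec above follows from simple arithmetic: each worker reports 2358466523136 allocatable, the per-pod share is one tenth of that (235846652313, the "Using pod capacity" figure), so 10 overcommit pods fit on each of the two workers and the extra additional-pod fails with "Insufficient ephemeral-storage". A minimal sketch of a filler pod carrying such a request, assuming client-go types and the pause image seen in the events (the fillerPod helper name is an illustrative assumption, not the test's actual code):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod returns a pause pod whose ephemeral-storage request and limit are
// a fixed share of a node's allocatable, mirroring the overcommit-N pods in
// the log above. Helper name and image tag are assumptions for illustration.
func fillerPod(name string, ephemeralBytes int64) *v1.Pod {
	q := resource.NewQuantity(ephemeralBytes, resource.BinarySI)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceEphemeralStorage: *q},
					Limits:   v1.ResourceList{v1.ResourceEphemeralStorage: *q},
				},
			}},
		},
	}
}

func main() {
	allocatable := int64(2358466523136) // per-node allocatable from the log
	perPod := allocatable / 10          // 235846652313, the "Using pod capacity" value
	p := fillerPod("overcommit-0", perPod)
	req := p.Spec.Containers[0].Resources.Requests[v1.ResourceEphemeralStorage]
	fmt.Println(p.Name, req.String())
}

With 20 such pods spread over the two workers, the scheduler has no remaining ephemeral-storage headroom, which is exactly the FailedScheduling event shown for additional-pod.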
Mar 21 23:32:35.120: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 21 23:32:35.197: INFO: chaos-controller-manager-69c479c674-7xglh from default started at 2021-03-21 23:27:10 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.197: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:32:35.197: INFO: chaos-daemon-qkndt from default started at 2021-03-21 18:05:47 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.197: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:32:35.197: INFO: kindnet-sbskd from kube-system started at 2021-03-21 18:05:46 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.197: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:32:35.198: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:32:35.198: INFO: netserver-0 from nettest-6407 started at 2021-03-21 23:32:19 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container webserver ready: false, restart count 0 Mar 21 23:32:35.198: INFO: pfpod from port-forwarding-3033 started at 2021-03-21 23:31:39 +0000 UTC (2 container statuses recorded) Mar 21 23:32:35.198: INFO: Container portforwardtester ready: false, restart count 0 Mar 21 23:32:35.198: INFO: Container readiness ready: false, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-11 from sched-pred-9856 started at 2021-03-21 23:31:55 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-11 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-12 from sched-pred-9856 started at 2021-03-21 23:31:55 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-12 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-13 from sched-pred-9856 started at 2021-03-21 23:31:55 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-13 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-14 from sched-pred-9856 started at 2021-03-21 23:31:56 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-14 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-15 from sched-pred-9856 started at 2021-03-21 23:31:56 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-15 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-16 from sched-pred-9856 started at 2021-03-21 23:31:56 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-16 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-17 from sched-pred-9856 started at 2021-03-21 23:31:57 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-17 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-18 from sched-pred-9856 started at 2021-03-21 23:31:57 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-18 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-19 from sched-pred-9856 started at 2021-03-21 23:31:57 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container overcommit-19 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: overcommit-9 from sched-pred-9856 started at 2021-03-21 23:31:54 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: 
INFO: Container overcommit-9 ready: true, restart count 0 Mar 21 23:32:35.198: INFO: ss2-0 from statefulset-7908 started at 2021-03-21 23:30:00 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container webserver ready: false, restart count 0 Mar 21 23:32:35.198: INFO: inclusterclient from svcaccounts-7255 started at 2021-03-21 23:22:57 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.198: INFO: Container inclusterclient ready: false, restart count 0 Mar 21 23:32:35.198: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 21 23:32:35.307: INFO: chaos-daemon-wl4fl from default started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:32:35.307: INFO: coredns-74ff55c5b-7tkvj from kube-system started at 2021-03-21 23:31:52 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container coredns ready: true, restart count 0 Mar 21 23:32:35.307: INFO: kindnet-vhlbm from kube-system started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:32:35.307: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:32:35.307: INFO: netserver-1 from nettest-6407 started at 2021-03-21 23:32:19 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container webserver ready: false, restart count 0 Mar 21 23:32:35.307: INFO: agnhost-pod from node-authn-9645 started at 2021-03-21 23:32:20 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:32:35.307: INFO: pfpod from port-forwarding-5054 started at 2021-03-21 23:32:08 +0000 UTC (2 container statuses recorded) Mar 21 23:32:35.307: INFO: Container portforwardtester ready: false, restart count 0 Mar 21 23:32:35.307: INFO: Container readiness ready: false, restart count 0 Mar 21 23:32:35.307: INFO: overcommit-0 from sched-pred-9856 started at 2021-03-21 23:31:51 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container overcommit-0 ready: true, restart count 0 Mar 21 23:32:35.307: INFO: overcommit-1 from sched-pred-9856 started at 2021-03-21 23:31:52 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container overcommit-1 ready: true, restart count 0 Mar 21 23:32:35.307: INFO: overcommit-10 from sched-pred-9856 started at 2021-03-21 23:31:54 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.307: INFO: Container overcommit-10 ready: true, restart count 0 Mar 21 23:32:35.307: INFO: overcommit-2 from sched-pred-9856 started at 2021-03-21 23:31:52 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.308: INFO: Container overcommit-2 ready: true, restart count 0 Mar 21 23:32:35.308: INFO: overcommit-3 from sched-pred-9856 started at 2021-03-21 23:31:52 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.308: INFO: Container overcommit-3 ready: true, restart count 0 Mar 21 23:32:35.308: INFO: overcommit-4 from sched-pred-9856 started at 2021-03-21 23:31:52 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.308: INFO: Container overcommit-4 ready: true, restart count 0 Mar 21 23:32:35.308: INFO: overcommit-5 from sched-pred-9856 started at 2021-03-21 23:31:53 +0000 UTC (1 container statuses 
recorded) Mar 21 23:32:35.308: INFO: Container overcommit-5 ready: true, restart count 0 Mar 21 23:32:35.308: INFO: overcommit-6 from sched-pred-9856 started at 2021-03-21 23:31:53 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.308: INFO: Container overcommit-6 ready: true, restart count 0 Mar 21 23:32:35.308: INFO: overcommit-7 from sched-pred-9856 started at 2021-03-21 23:31:53 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.308: INFO: Container overcommit-7 ready: true, restart count 0 Mar 21 23:32:35.308: INFO: overcommit-8 from sched-pred-9856 started at 2021-03-21 23:31:53 +0000 UTC (1 container statuses recorded) Mar 21 23:32:35.308: INFO: Container overcommit-8 ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-b02aa1cf-4528-4c82-8f9e-42f4be40166a.166e7f94d82e7f57], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-b02aa1cf-4528-4c82-8f9e-42f4be40166a.166e7f965a848c52], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5761/filler-pod-b02aa1cf-4528-4c82-8f9e-42f4be40166a to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-b02aa1cf-4528-4c82-8f9e-42f4be40166a.166e7f97028bd8c3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b02aa1cf-4528-4c82-8f9e-42f4be40166a.166e7f97d582bd7b], Reason = [Created], Message = [Created container filler-pod-b02aa1cf-4528-4c82-8f9e-42f4be40166a] STEP: Considering event: Type = [Normal], Name = [filler-pod-b02aa1cf-4528-4c82-8f9e-42f4be40166a.166e7f97fc48730d], Reason = [Started], Message = [Started container filler-pod-b02aa1cf-4528-4c82-8f9e-42f4be40166a] STEP: Considering event: Type = [Normal], Name = [without-label.166e7f9249e82f68], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5761/without-label to latest-worker2] STEP: Considering event: Type = [Normal], Name = [without-label.166e7f92f0d24bb1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-label.166e7f93a29af981], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.166e7f93bba42940], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.166e7f945b11d790], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = 
[additional-podb1fda05a-299c-4f16-9177-a8f727ab9c47.166e7f98dd22091a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:33:32.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5761" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:59.938 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":16,"completed":7,"skipped":3251,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:178 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:33:34.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 Mar 21 23:33:36.666: INFO: Waiting up to 1m0s for all 
nodes to be ready Mar 21 23:34:37.558: INFO: Waiting for terminating namespaces to be deleted... Mar 21 23:34:38.394: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 21 23:34:39.844: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (1 seconds elapsed) Mar 21 23:34:39.844: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 21 23:34:39.844: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:34:40.019: INFO: Pod for on the node: chaos-controller-manager-69c479c674-7xglh, Cpu: 25, Mem: 268435456 Mar 21 23:34:40.019: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:34:40.019: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-01c4f0fe-ffb0-4c71-8ad8-9f8cff30f891, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-06027d06-559f-406e-9a73-c5c079d558ce, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-0752e077-379f-452b-9013-591bcd641f16, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-0887789a-787c-42c7-924a-0379d1cc5048, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-089059bd-6388-4ac2-844c-1197a01b79d9, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-0c418865-34e7-44dc-be69-f11a703f74c3, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-0de3e8ff-f00a-44b1-8773-991992d6574e, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-10226faf-c1ef-4afb-8869-3acfa69b0d4c, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-12e2cfdb-d9d9-4497-93d9-1b007b79f74a, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-16301c2f-3c2c-4ef1-a133-54831bfeb99c, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-1aa14b50-65ac-4dfc-b2e8-562439da5b19, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-1b06547f-b63d-48cb-81a6-84e0357c84b6, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-2767b42e-8f9d-4ace-828a-433be022f310, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-27e0cf8f-9a11-4de6-bfd7-bfe1f95ce035, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-28ac377e-f205-45f6-84cc-a27e1ecaf09c, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-2d233342-1525-4ff2-91a1-146f868c6814, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-345432ce-5c53-4b60-9412-1c9a395bdd7c, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-35525694-3aee-4acb-b97f-4a9f596b45be, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-38d4cf75-96dc-44b3-af46-c45dcd69a294, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-48047d8c-7439-4c12-a348-fbac2b516959, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-4d68a905-4947-4719-be29-1a14ef9fba30, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-4fb0106b-1599-4307-b169-23bcc9245bea, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-51a22e8d-d20e-4473-8d1c-27b7808e209e, Cpu: 100, Mem: 209715200 Mar 21 
23:34:40.019: INFO: Pod for on the node: pod-5661329f-4bb7-4a6c-ab68-d9db626cedfa, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-5ce04778-d9d6-40bb-abf7-889fce748d3f, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-5e7513fa-2103-4670-8f8a-3c582b187dcd, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-61465ce3-09e1-450a-8a48-ae88a5206db4, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-7305e400-cb10-411c-8af9-a06c69469066, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-77aaf081-f268-403e-8d65-f0d09e5d6d75, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.019: INFO: Pod for on the node: pod-78c5a3fc-0f8e-4397-b89a-f3cc8a9a39ca, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-796dc1f8-8fa6-4543-9740-22ac4901b1e4, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-8d95713e-1a47-404b-a862-abfdb60ede75, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-98e38606-15d4-41e4-aa7f-32b8916dbbea, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-9adb5442-3447-434c-b688-f79175318527, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-a37f3319-bcc9-406f-bda3-398a3e25f577, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-b3b3ce98-d5c7-475d-87a1-c6ed5076711e, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-b5316c30-c9d9-4a5c-9f19-a25490f03e93, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-b9e15f19-8ac3-4cd8-a8bc-5f1bbf017676, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-c0118f98-f14e-4bf4-9d84-6fe0aca139e5, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-c5af9948-3c0c-4884-adba-20f865d330c1, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-d4c825ca-2f98-4a30-8325-8910ea310f21, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-da4759bd-d4df-401a-a992-7300350783fe, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-e5d4c0fe-9fd2-48aa-9b57-7dc48003ac6a, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-e65d8500-8f75-47cf-8af5-242d346004d5, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-eb4089c1-5934-476b-9bef-fb24fe87187a, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-ebd1422a-1278-498a-ab80-b9faa6216c77, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-ec8e9cdd-52f5-441a-91cb-86be089e2481, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-ed9a418c-2df1-48f0-b932-e010172a8189, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-f1ba7a17-bbf1-4b9f-9e1d-ba6627d83f20, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pod-f4a4a7d1-da33-48cd-a596-b33e45b2af2b, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: pfpod, Cpu: 200, Mem: 419430400 Mar 21 23:34:40.020: INFO: Pod for on the node: up-down-1-d658f, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: up-down-2-8dhrn, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: up-down-2-ll5sq, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: up-down-2-t52rr, Cpu: 100, Mem: 209715200 Mar 21 
23:34:40.020: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.020: INFO: Node: latest-worker, totalRequestedCPUResource: 225, cpuAllocatableMil: 16000, cpuFraction: 0.0140625 Mar 21 23:34:40.020: INFO: Node: latest-worker, totalRequestedMemResource: 425721856, memAllocatableVal: 134922104832, memFraction: 0.0031553158508021576 Mar 21 23:34:40.020: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:34:40.172: INFO: Pod for on the node: pod83bee412-967d-4c68-b591-51659e01a21c, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.172: INFO: Pod for on the node: chaos-daemon-wl4fl, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.172: INFO: Pod for on the node: coredns-74ff55c5b-7tkvj, Cpu: 100, Mem: 73400320 Mar 21 23:34:40.172: INFO: Pod for on the node: kindnet-vhlbm, Cpu: 100, Mem: 52428800 Mar 21 23:34:40.172: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.172: INFO: Pod for on the node: pod-submit-status-0-0, Cpu: 5, Mem: 10485760 Mar 21 23:34:40.172: INFO: Pod for on the node: pod-submit-status-1-0, Cpu: 5, Mem: 10485760 Mar 21 23:34:40.172: INFO: Pod for on the node: pod-submit-status-2-0, Cpu: 5, Mem: 10485760 Mar 21 23:34:40.172: INFO: Pod for on the node: up-down-1-glsqr, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.172: INFO: Pod for on the node: up-down-1-nds9x, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.172: INFO: Pod for on the node: verify-service-up-exec-pod-t8kc7, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.172: INFO: Pod for on the node: verify-service-up-host-exec-pod, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.172: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:34:40.172: INFO: Node: latest-worker2, totalRequestedCPUResource: 315, cpuAllocatableMil: 16000, cpuFraction: 0.0196875 Mar 21 23:34:40.172: INFO: Node: latest-worker2, totalRequestedMemResource: 262144000, memAllocatableVal: 134922104832, memFraction: 0.0019429284795579788 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:178 STEP: Trying to launch a pod with a label to get a node which can launch it. 
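The SchedulerPriorities spec in progress here ("Pod should be scheduled to node that don't match the PodAntiAffinity terms") first balances CPU and memory across the two workers (the ComputeCPUMemFraction logging above) and then expects a new pod to be steered away from the node running the labelled pod created in the step just above (it appears later in the node listings as pod-with-label-security-s1). A hypothetical sketch of a pod carrying that kind of anti-affinity term, assuming a security=S1 label, client-go types, and a preferred (weighted) term since this is a scoring test — not the spec's actual source:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinityPod sketches a pause pod that prefers not to share a node
// (topologyKey kubernetes.io/hostname) with pods labelled security=S1.
// Pod name, label value, and the use of a weighted preferred term are
// assumptions for illustration, not the e2e test's exact definition.
func antiAffinityPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-anti-affinity"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			Affinity: &v1.Affinity{
				PodAntiAffinity: &v1.PodAntiAffinity{
					PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
						Weight: 100,
						PodAffinityTerm: v1.PodAffinityTerm{
							LabelSelector: &metav1.LabelSelector{
								MatchLabels: map[string]string{"security": "S1"},
							},
							TopologyKey: "kubernetes.io/hostname",
						},
					}},
				},
			},
		},
	}
}

func main() {
	p := antiAffinityPod()
	term := p.Spec.Affinity.PodAntiAffinity.PreferredDuringSchedulingIgnoredDuringExecution[0]
	fmt.Println(p.Name, term.Weight, term.PodAffinityTerm.TopologyKey)
}

Because the anti-affinity term keys on kubernetes.io/hostname, the scorer favours whichever worker does not already host a security=S1 pod, which is the outcome the spec verifies.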
STEP: Verifying the node has a label kubernetes.io/hostname Mar 21 23:34:58.763: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:34:59.322: INFO: Pod for on the node: chaos-controller-manager-69c479c674-7xglh, Cpu: 25, Mem: 268435456 Mar 21 23:34:59.322: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:34:59.322: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-01c4f0fe-ffb0-4c71-8ad8-9f8cff30f891, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-06027d06-559f-406e-9a73-c5c079d558ce, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-0752e077-379f-452b-9013-591bcd641f16, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-0887789a-787c-42c7-924a-0379d1cc5048, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-089059bd-6388-4ac2-844c-1197a01b79d9, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-0c418865-34e7-44dc-be69-f11a703f74c3, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-0de3e8ff-f00a-44b1-8773-991992d6574e, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-10226faf-c1ef-4afb-8869-3acfa69b0d4c, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-12e2cfdb-d9d9-4497-93d9-1b007b79f74a, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-16301c2f-3c2c-4ef1-a133-54831bfeb99c, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-1aa14b50-65ac-4dfc-b2e8-562439da5b19, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-1b06547f-b63d-48cb-81a6-84e0357c84b6, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-2767b42e-8f9d-4ace-828a-433be022f310, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-27e0cf8f-9a11-4de6-bfd7-bfe1f95ce035, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-28ac377e-f205-45f6-84cc-a27e1ecaf09c, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-2d233342-1525-4ff2-91a1-146f868c6814, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-345432ce-5c53-4b60-9412-1c9a395bdd7c, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-35525694-3aee-4acb-b97f-4a9f596b45be, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-38d4cf75-96dc-44b3-af46-c45dcd69a294, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-48047d8c-7439-4c12-a348-fbac2b516959, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-4d68a905-4947-4719-be29-1a14ef9fba30, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-4fb0106b-1599-4307-b169-23bcc9245bea, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-51a22e8d-d20e-4473-8d1c-27b7808e209e, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-5661329f-4bb7-4a6c-ab68-d9db626cedfa, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-5ce04778-d9d6-40bb-abf7-889fce748d3f, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-5e7513fa-2103-4670-8f8a-3c582b187dcd, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the 
node: pod-61465ce3-09e1-450a-8a48-ae88a5206db4, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-7305e400-cb10-411c-8af9-a06c69469066, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-77aaf081-f268-403e-8d65-f0d09e5d6d75, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-78c5a3fc-0f8e-4397-b89a-f3cc8a9a39ca, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-796dc1f8-8fa6-4543-9740-22ac4901b1e4, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-8d95713e-1a47-404b-a862-abfdb60ede75, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-98e38606-15d4-41e4-aa7f-32b8916dbbea, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-9adb5442-3447-434c-b688-f79175318527, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-a37f3319-bcc9-406f-bda3-398a3e25f577, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-b3b3ce98-d5c7-475d-87a1-c6ed5076711e, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-b5316c30-c9d9-4a5c-9f19-a25490f03e93, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-b9e15f19-8ac3-4cd8-a8bc-5f1bbf017676, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-c0118f98-f14e-4bf4-9d84-6fe0aca139e5, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-c5af9948-3c0c-4884-adba-20f865d330c1, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-d4c825ca-2f98-4a30-8325-8910ea310f21, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-da4759bd-d4df-401a-a992-7300350783fe, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-e5d4c0fe-9fd2-48aa-9b57-7dc48003ac6a, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-e65d8500-8f75-47cf-8af5-242d346004d5, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-eb4089c1-5934-476b-9bef-fb24fe87187a, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-ebd1422a-1278-498a-ab80-b9faa6216c77, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-ec8e9cdd-52f5-441a-91cb-86be089e2481, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-ed9a418c-2df1-48f0-b932-e010172a8189, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-f1ba7a17-bbf1-4b9f-9e1d-ba6627d83f20, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: pod-f4a4a7d1-da33-48cd-a596-b33e45b2af2b, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: up-down-1-d658f, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: up-down-2-8dhrn, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: up-down-2-ll5sq, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: up-down-2-t52rr, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:34:59.322: INFO: Node: latest-worker, totalRequestedCPUResource: 225, cpuAllocatableMil: 16000, cpuFraction: 0.0140625 Mar 21 23:34:59.322: INFO: Node: latest-worker, totalRequestedMemResource: 425721856, memAllocatableVal: 134922104832, memFraction: 0.0031553158508021576 Mar 21 23:34:59.322: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:35:00.047: INFO: Pod for on 
the node: pod83bee412-967d-4c68-b591-51659e01a21c, Cpu: 100, Mem: 209715200 Mar 21 23:35:00.047: INFO: Pod for on the node: chaos-daemon-wl4fl, Cpu: 100, Mem: 209715200 Mar 21 23:35:00.047: INFO: Pod for on the node: coredns-74ff55c5b-7tkvj, Cpu: 100, Mem: 73400320 Mar 21 23:35:00.047: INFO: Pod for on the node: kindnet-vhlbm, Cpu: 100, Mem: 52428800 Mar 21 23:35:00.047: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:35:00.047: INFO: Pod for on the node: pod-submit-status-0-0, Cpu: 5, Mem: 10485760 Mar 21 23:35:00.047: INFO: Pod for on the node: pod-submit-status-1-0, Cpu: 5, Mem: 10485760 Mar 21 23:35:00.047: INFO: Pod for on the node: pod-submit-status-2-0, Cpu: 5, Mem: 10485760 Mar 21 23:35:00.047: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Mar 21 23:35:00.047: INFO: Pod for on the node: up-down-1-glsqr, Cpu: 100, Mem: 209715200 Mar 21 23:35:00.047: INFO: Pod for on the node: up-down-1-nds9x, Cpu: 100, Mem: 209715200 Mar 21 23:35:00.047: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:35:00.047: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:35:00.047: INFO: Node: latest-worker2, totalRequestedCPUResource: 315, cpuAllocatableMil: 16000, cpuFraction: 0.0196875 Mar 21 23:35:00.047: INFO: Node: latest-worker2, totalRequestedMemResource: 262144000, memAllocatableVal: 134922104832, memFraction: 0.0019429284795579788 Mar 21 23:35:00.625: INFO: Waiting for running... Mar 21 23:35:31.084: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 21 23:35:41.138: INFO: ComputeCPUMemFraction for node: latest-worker Mar 21 23:35:41.593: INFO: Pod for on the node: chaos-controller-manager-69c479c674-7xglh, Cpu: 25, Mem: 268435456 Mar 21 23:35:41.593: INFO: Pod for on the node: chaos-daemon-qkndt, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: kindnet-sbskd, Cpu: 100, Mem: 52428800 Mar 21 23:35:41.593: INFO: Pod for on the node: kube-proxy-5wvjm, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-01c4f0fe-ffb0-4c71-8ad8-9f8cff30f891, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-06027d06-559f-406e-9a73-c5c079d558ce, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-0752e077-379f-452b-9013-591bcd641f16, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-0887789a-787c-42c7-924a-0379d1cc5048, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-089059bd-6388-4ac2-844c-1197a01b79d9, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-0c418865-34e7-44dc-be69-f11a703f74c3, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-0de3e8ff-f00a-44b1-8773-991992d6574e, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-10226faf-c1ef-4afb-8869-3acfa69b0d4c, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-12e2cfdb-d9d9-4497-93d9-1b007b79f74a, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-16301c2f-3c2c-4ef1-a133-54831bfeb99c, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-1aa14b50-65ac-4dfc-b2e8-562439da5b19, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-1b06547f-b63d-48cb-81a6-84e0357c84b6, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-2767b42e-8f9d-4ace-828a-433be022f310, Cpu: 100, Mem: 
209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-27e0cf8f-9a11-4de6-bfd7-bfe1f95ce035, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-28ac377e-f205-45f6-84cc-a27e1ecaf09c, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-2d233342-1525-4ff2-91a1-146f868c6814, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-345432ce-5c53-4b60-9412-1c9a395bdd7c, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-35525694-3aee-4acb-b97f-4a9f596b45be, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-38d4cf75-96dc-44b3-af46-c45dcd69a294, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-48047d8c-7439-4c12-a348-fbac2b516959, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-4d68a905-4947-4719-be29-1a14ef9fba30, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-4fb0106b-1599-4307-b169-23bcc9245bea, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-51a22e8d-d20e-4473-8d1c-27b7808e209e, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-5661329f-4bb7-4a6c-ab68-d9db626cedfa, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-5ce04778-d9d6-40bb-abf7-889fce748d3f, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-5e7513fa-2103-4670-8f8a-3c582b187dcd, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-61465ce3-09e1-450a-8a48-ae88a5206db4, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-7305e400-cb10-411c-8af9-a06c69469066, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-77aaf081-f268-403e-8d65-f0d09e5d6d75, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-78c5a3fc-0f8e-4397-b89a-f3cc8a9a39ca, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-796dc1f8-8fa6-4543-9740-22ac4901b1e4, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-8d95713e-1a47-404b-a862-abfdb60ede75, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-98e38606-15d4-41e4-aa7f-32b8916dbbea, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-9adb5442-3447-434c-b688-f79175318527, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-a37f3319-bcc9-406f-bda3-398a3e25f577, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-b3b3ce98-d5c7-475d-87a1-c6ed5076711e, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-b5316c30-c9d9-4a5c-9f19-a25490f03e93, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-b9e15f19-8ac3-4cd8-a8bc-5f1bbf017676, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-c0118f98-f14e-4bf4-9d84-6fe0aca139e5, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-c5af9948-3c0c-4884-adba-20f865d330c1, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-d4c825ca-2f98-4a30-8325-8910ea310f21, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-da4759bd-d4df-401a-a992-7300350783fe, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-e5d4c0fe-9fd2-48aa-9b57-7dc48003ac6a, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-e65d8500-8f75-47cf-8af5-242d346004d5, Cpu: 
100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-eb4089c1-5934-476b-9bef-fb24fe87187a, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-ebd1422a-1278-498a-ab80-b9faa6216c77, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-ec8e9cdd-52f5-441a-91cb-86be089e2481, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-ed9a418c-2df1-48f0-b932-e010172a8189, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-f1ba7a17-bbf1-4b9f-9e1d-ba6627d83f20, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: pod-f4a4a7d1-da33-48cd-a596-b33e45b2af2b, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: 1b2cb9b6-f96d-4d21-b0a7-635a158cf8c6-0, Cpu: 9375, Mem: 80540123955 Mar 21 23:35:41.593: INFO: Pod for on the node: up-down-2-8dhrn, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: up-down-2-ll5sq, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Pod for on the node: up-down-2-t52rr, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.593: INFO: Node: latest-worker, totalRequestedCPUResource: 9600, cpuAllocatableMil: 16000, cpuFraction: 0.6 Mar 21 23:35:41.593: INFO: Node: latest-worker, totalRequestedMemResource: 80965845811, memAllocatableVal: 134922104832, memFraction: 0.6000932605655365 STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 21 23:35:41.593: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 21 23:35:41.950: INFO: Pod for on the node: rally-eaeef54f-9ehtga3e, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: csi-mockplugin-0, Cpu: 300, Mem: 629145600 Mar 21 23:35:41.950: INFO: Pod for on the node: csi-mockplugin-attacher-0, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: csi-mockplugin-resizer-0, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: chaos-daemon-wl4fl, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: coredns-74ff55c5b-7tkvj, Cpu: 100, Mem: 73400320 Mar 21 23:35:41.950: INFO: Pod for on the node: kindnet-vhlbm, Cpu: 100, Mem: 52428800 Mar 21 23:35:41.950: INFO: Pod for on the node: kube-proxy-7q92q, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: pod-submit-status-0-1, Cpu: 5, Mem: 10485760 Mar 21 23:35:41.950: INFO: Pod for on the node: pod-submit-status-1-1, Cpu: 5, Mem: 10485760 Mar 21 23:35:41.950: INFO: Pod for on the node: pod-submit-status-2-1, Cpu: 5, Mem: 10485760 Mar 21 23:35:41.950: INFO: Pod for on the node: 61f3135a-a2b8-4910-bc7a-85f6f086c73f-0, Cpu: 9285, Mem: 80703701811 Mar 21 23:35:41.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: verify-service-up-host-exec-pod, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: ss2-0, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: ss2-1, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Pod for on the node: ss2-2, Cpu: 100, Mem: 209715200 Mar 21 23:35:41.950: INFO: Node: latest-worker2, totalRequestedCPUResource: 9600, cpuAllocatableMil: 16000, cpuFraction: 0.6 Mar 21 23:35:41.950: INFO: Node: latest-worker2, totalRequestedMemResource: 80965845811, memAllocatableVal: 134922104832, memFraction: 0.6000932605655365 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. 
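Note on the fraction bookkeeping above: the cpuFraction/memFraction the log prints is simply requested/allocatable per node, and the large pause pod (9375 millicores on latest-worker) is sized so that both nodes land on the same target ratio of 0.6 before the anti-affinity pod is launched. Below is a minimal Go sketch of that arithmetic using the figures logged for latest-worker; the helper names are illustrative and are not the e2e framework's own functions.

```go
package main

import (
	"fmt"
	"math"
)

// fraction is the ratio the log prints as cpuFraction / memFraction:
// total requested resource on the node divided by its allocatable amount.
func fraction(requested, allocatable int64) float64 {
	return float64(requested) / float64(allocatable)
}

// fillerRequest sizes the "balanced" pause pod: the extra request needed
// so the node ends up at the target ratio (0.6 in this run).
func fillerRequest(target float64, requested, allocatable int64) int64 {
	return int64(math.Round(target*float64(allocatable))) - requested
}

func main() {
	// Figures taken from the latest-worker lines above (CPU in millicores).
	const (
		cpuRequested   int64 = 225
		cpuAllocatable int64 = 16000
	)
	fmt.Println(fraction(cpuRequested, cpuAllocatable))           // 0.0140625, as logged
	fmt.Println(fillerRequest(0.6, cpuRequested, cpuAllocatable)) // 9375 -> the 9375m pause pod
	fmt.Println(fraction(cpuRequested+9375, cpuAllocatable))      // 0.6 after the filler pod
}
```

Applying the same formula to latest-worker2, which already had 315 millicores requested, gives the 9285-millicore filler pod seen in its listing; memory is balanced by the same rule.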
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:36:20.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7863" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:150 • [SLOW TEST:167.441 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:178 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":16,"completed":8,"skipped":3753,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:76 [BeforeEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:36:21.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:47 Mar 21 23:36:24.413: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:36:24.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-3370" for this suite. 
[AfterEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:67 S [SKIPPING] in Spec Setup (BeforeEach) [3.413 seconds] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should spread the pods of a replication controller across zones [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:76 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:48 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:36:25.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 21 23:36:26.820: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 23:36:27.194: INFO: Waiting for terminating namespaces to be deleted... 
Mar 21 23:36:27.523: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 21 23:36:28.036: INFO: chaos-controller-manager-69c479c674-7xglh from default started at 2021-03-21 23:27:10 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:36:28.037: INFO: chaos-daemon-qkndt from default started at 2021-03-21 18:05:47 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:36:28.037: INFO: kindnet-sbskd from kube-system started at 2021-03-21 18:05:46 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:36:28.037: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-01c4f0fe-ffb0-4c71-8ad8-9f8cff30f891 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-06027d06-559f-406e-9a73-c5c079d558ce from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:04 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-0752e077-379f-452b-9013-591bcd641f16 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:04 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:28.037: INFO: pod-0887789a-787c-42c7-924a-0379d1cc5048 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:28.037: INFO: pod-089059bd-6388-4ac2-844c-1197a01b79d9 from persistent-local-volumes-test-8193 started at 2021-03-21 23:33:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-0c418865-34e7-44dc-be69-f11a703f74c3 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-0de3e8ff-f00a-44b1-8773-991992d6574e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-10226faf-c1ef-4afb-8869-3acfa69b0d4c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-12e2cfdb-d9d9-4497-93d9-1b007b79f74a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-16301c2f-3c2c-4ef1-a133-54831bfeb99c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: 
INFO: pod-1aa14b50-65ac-4dfc-b2e8-562439da5b19 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-1b06547f-b63d-48cb-81a6-84e0357c84b6 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-2767b42e-8f9d-4ace-828a-433be022f310 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-27e0cf8f-9a11-4de6-bfd7-bfe1f95ce035 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-28ac377e-f205-45f6-84cc-a27e1ecaf09c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-2d233342-1525-4ff2-91a1-146f868c6814 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-345432ce-5c53-4b60-9412-1c9a395bdd7c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-35525694-3aee-4acb-b97f-4a9f596b45be from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-38d4cf75-96dc-44b3-af46-c45dcd69a294 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-48047d8c-7439-4c12-a348-fbac2b516959 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-4d68a905-4947-4719-be29-1a14ef9fba30 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-4fb0106b-1599-4307-b169-23bcc9245bea from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-51a22e8d-d20e-4473-8d1c-27b7808e209e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.037: INFO: pod-5661329f-4bb7-4a6c-ab68-d9db626cedfa from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.037: INFO: Container write-pod ready: 
true, restart count 0 Mar 21 23:36:28.038: INFO: pod-5ce04778-d9d6-40bb-abf7-889fce748d3f from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-5e7513fa-2103-4670-8f8a-3c582b187dcd from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-61465ce3-09e1-450a-8a48-ae88a5206db4 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-7305e400-cb10-411c-8af9-a06c69469066 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-77aaf081-f268-403e-8d65-f0d09e5d6d75 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-78c5a3fc-0f8e-4397-b89a-f3cc8a9a39ca from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-796dc1f8-8fa6-4543-9740-22ac4901b1e4 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-8d95713e-1a47-404b-a862-abfdb60ede75 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-98e38606-15d4-41e4-aa7f-32b8916dbbea from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-9adb5442-3447-434c-b688-f79175318527 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-a37f3319-bcc9-406f-bda3-398a3e25f577 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-b3b3ce98-d5c7-475d-87a1-c6ed5076711e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-b5316c30-c9d9-4a5c-9f19-a25490f03e93 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-b9e15f19-8ac3-4cd8-a8bc-5f1bbf017676 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 
23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-c0118f98-f14e-4bf4-9d84-6fe0aca139e5 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-c5af9948-3c0c-4884-adba-20f865d330c1 from persistent-local-volumes-test-8193 started at 2021-03-21 23:33:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-d4c825ca-2f98-4a30-8325-8910ea310f21 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-da4759bd-d4df-401a-a992-7300350783fe from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-e5d4c0fe-9fd2-48aa-9b57-7dc48003ac6a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-e65d8500-8f75-47cf-8af5-242d346004d5 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-eb4089c1-5934-476b-9bef-fb24fe87187a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-ebd1422a-1278-498a-ab80-b9faa6216c77 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-ec8e9cdd-52f5-441a-91cb-86be089e2481 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-ed9a418c-2df1-48f0-b932-e010172a8189 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-f1ba7a17-bbf1-4b9f-9e1d-ba6627d83f20 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-f4a4a7d1-da33-48cd-a596-b33e45b2af2b from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:28.038: INFO: pod-submit-status-1-2 from pods-4168 started at 2021-03-21 23:36:17 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:28.038: INFO: pod-with-pod-antiaffinity from sched-priority-7863 started at 2021-03-21 23:35:42 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.038: INFO: Container 
pod-with-pod-antiaffinity ready: true, restart count 0 Mar 21 23:36:28.039: INFO: up-down-2-8dhrn from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.039: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:28.039: INFO: up-down-2-ll5sq from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.039: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:28.039: INFO: up-down-2-t52rr from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.039: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:28.039: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 21 23:36:28.539: INFO: csi-mockplugin-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (3 container statuses recorded) Mar 21 23:36:28.539: INFO: Container csi-provisioner ready: true, restart count 0 Mar 21 23:36:28.539: INFO: Container driver-registrar ready: true, restart count 0 Mar 21 23:36:28.539: INFO: Container mock ready: true, restart count 0 Mar 21 23:36:28.539: INFO: csi-mockplugin-attacher-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container csi-attacher ready: true, restart count 0 Mar 21 23:36:28.539: INFO: csi-mockplugin-resizer-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container csi-resizer ready: true, restart count 0 Mar 21 23:36:28.539: INFO: pvc-volume-tester-p8sn5 from csi-mock-volumes-9492 started at 2021-03-21 23:35:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container volume-tester ready: false, restart count 0 Mar 21 23:36:28.539: INFO: chaos-daemon-wl4fl from default started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:36:28.539: INFO: coredns-74ff55c5b-7tkvj from kube-system started at 2021-03-21 23:31:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container coredns ready: true, restart count 0 Mar 21 23:36:28.539: INFO: kindnet-vhlbm from kube-system started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:36:28.539: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:36:28.539: INFO: pod-submit-status-0-2 from pods-4168 started at 2021-03-21 23:36:15 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:28.539: INFO: pod-submit-status-2-2 from pods-4168 started at 2021-03-21 23:36:20 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:28.539: INFO: pod-with-label-security-s1 from sched-priority-7863 started at 2021-03-21 23:34:42 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 Mar 21 23:36:28.539: INFO: up-down-3-9z77c from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: 
Container up-down-3 ready: true, restart count 0 Mar 21 23:36:28.539: INFO: up-down-3-x4bt5 from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container up-down-3 ready: true, restart count 0 Mar 21 23:36:28.539: INFO: up-down-3-zkvdj from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container up-down-3 ready: true, restart count 0 Mar 21 23:36:28.539: INFO: verify-service-up-exec-pod-8xqx7 from services-3785 started at (0 container statuses recorded) Mar 21 23:36:28.539: INFO: verify-service-up-host-exec-pod from services-3785 started at 2021-03-21 23:36:11 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:36:28.539: INFO: ss2-0 from statefulset-612 started at 2021-03-21 23:36:16 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container webserver ready: true, restart count 0 Mar 21 23:36:28.539: INFO: ss2-1 from statefulset-612 started at 2021-03-21 23:35:36 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container webserver ready: true, restart count 0 Mar 21 23:36:28.539: INFO: ss2-2 from statefulset-612 started at 2021-03-21 23:34:54 +0000 UTC (1 container statuses recorded) Mar 21 23:36:28.539: INFO: Container webserver ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.166e7fc2a76411f2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:36:31.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-42" for this suite. 
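For reference, the FailedScheduling event above ("2 node(s) didn't match Pod's node affinity") is what the scheduler reports when a pod carries a required node-affinity term (or node selector) keyed on a label no node has. A minimal sketch of such a spec using the k8s.io/api/core/v1 types follows; the label key and value are illustrative, not the generated ones the test uses.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A required node-affinity term keyed on a label that no node carries.
	// With it, every worker reports "node(s) didn't match Pod's node affinity",
	// which is the FailedScheduling event seen above.
	affinity := &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-nonexistent-label", // illustrative key only
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"never-set"},
					}},
				}},
			},
		},
	}

	pod := corev1.Pod{Spec: corev1.PodSpec{Affinity: affinity}}
	fmt.Printf("%+v\n", pod.Spec.Affinity.NodeAffinity)
}
```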
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.583 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":16,"completed":9,"skipped":4485,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:36:31.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 21 23:36:33.419: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 23:36:33.693: INFO: Waiting for terminating namespaces to be deleted... 
Mar 21 23:36:33.712: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 21 23:36:34.148: INFO: chaos-controller-manager-69c479c674-7xglh from default started at 2021-03-21 23:27:10 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:36:34.148: INFO: chaos-daemon-qkndt from default started at 2021-03-21 18:05:47 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:36:34.148: INFO: kindnet-sbskd from kube-system started at 2021-03-21 18:05:46 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:36:34.148: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-01c4f0fe-ffb0-4c71-8ad8-9f8cff30f891 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:34.148: INFO: pod-06027d06-559f-406e-9a73-c5c079d558ce from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:04 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:34.148: INFO: pod-0752e077-379f-452b-9013-591bcd641f16 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:04 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:34.148: INFO: pod-0887789a-787c-42c7-924a-0379d1cc5048 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:34.148: INFO: pod-089059bd-6388-4ac2-844c-1197a01b79d9 from persistent-local-volumes-test-8193 started at 2021-03-21 23:33:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-0c418865-34e7-44dc-be69-f11a703f74c3 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-0de3e8ff-f00a-44b1-8773-991992d6574e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-10226faf-c1ef-4afb-8869-3acfa69b0d4c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-12e2cfdb-d9d9-4497-93d9-1b007b79f74a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-16301c2f-3c2c-4ef1-a133-54831bfeb99c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 
23:36:34.148: INFO: pod-1aa14b50-65ac-4dfc-b2e8-562439da5b19 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-1b06547f-b63d-48cb-81a6-84e0357c84b6 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-2767b42e-8f9d-4ace-828a-433be022f310 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-27e0cf8f-9a11-4de6-bfd7-bfe1f95ce035 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-28ac377e-f205-45f6-84cc-a27e1ecaf09c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-2d233342-1525-4ff2-91a1-146f868c6814 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-345432ce-5c53-4b60-9412-1c9a395bdd7c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-35525694-3aee-4acb-b97f-4a9f596b45be from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-38d4cf75-96dc-44b3-af46-c45dcd69a294 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-48047d8c-7439-4c12-a348-fbac2b516959 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-4d68a905-4947-4719-be29-1a14ef9fba30 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-4fb0106b-1599-4307-b169-23bcc9245bea from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-51a22e8d-d20e-4473-8d1c-27b7808e209e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-5661329f-4bb7-4a6c-ab68-d9db626cedfa from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container 
write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-5ce04778-d9d6-40bb-abf7-889fce748d3f from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-5e7513fa-2103-4670-8f8a-3c582b187dcd from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-61465ce3-09e1-450a-8a48-ae88a5206db4 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-7305e400-cb10-411c-8af9-a06c69469066 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-77aaf081-f268-403e-8d65-f0d09e5d6d75 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-78c5a3fc-0f8e-4397-b89a-f3cc8a9a39ca from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-796dc1f8-8fa6-4543-9740-22ac4901b1e4 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-8d95713e-1a47-404b-a862-abfdb60ede75 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-98e38606-15d4-41e4-aa7f-32b8916dbbea from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-9adb5442-3447-434c-b688-f79175318527 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-a37f3319-bcc9-406f-bda3-398a3e25f577 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-b3b3ce98-d5c7-475d-87a1-c6ed5076711e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-b5316c30-c9d9-4a5c-9f19-a25490f03e93 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-b9e15f19-8ac3-4cd8-a8bc-5f1bbf017676 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses 
recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-c0118f98-f14e-4bf4-9d84-6fe0aca139e5 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-c5af9948-3c0c-4884-adba-20f865d330c1 from persistent-local-volumes-test-8193 started at 2021-03-21 23:33:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-d4c825ca-2f98-4a30-8325-8910ea310f21 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-da4759bd-d4df-401a-a992-7300350783fe from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-e5d4c0fe-9fd2-48aa-9b57-7dc48003ac6a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-e65d8500-8f75-47cf-8af5-242d346004d5 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-eb4089c1-5934-476b-9bef-fb24fe87187a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-ebd1422a-1278-498a-ab80-b9faa6216c77 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-ec8e9cdd-52f5-441a-91cb-86be089e2481 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.148: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.148: INFO: pod-ed9a418c-2df1-48f0-b932-e010172a8189 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.149: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.149: INFO: pod-f1ba7a17-bbf1-4b9f-9e1d-ba6627d83f20 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.149: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.149: INFO: pod-f4a4a7d1-da33-48cd-a596-b33e45b2af2b from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.149: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:34.149: INFO: pod-submit-status-1-2 from pods-4168 started at 2021-03-21 23:36:17 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.149: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:34.149: INFO: pod-with-pod-antiaffinity from sched-priority-7863 started at 2021-03-21 23:35:42 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.149: 
INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Mar 21 23:36:34.149: INFO: up-down-2-8dhrn from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.149: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:34.149: INFO: up-down-2-ll5sq from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.149: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:34.149: INFO: up-down-2-t52rr from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.149: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:34.149: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 21 23:36:34.318: INFO: csi-mockplugin-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (3 container statuses recorded) Mar 21 23:36:34.318: INFO: Container csi-provisioner ready: true, restart count 0 Mar 21 23:36:34.318: INFO: Container driver-registrar ready: true, restart count 0 Mar 21 23:36:34.318: INFO: Container mock ready: true, restart count 0 Mar 21 23:36:34.318: INFO: csi-mockplugin-attacher-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.318: INFO: Container csi-attacher ready: true, restart count 0 Mar 21 23:36:34.318: INFO: csi-mockplugin-resizer-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.318: INFO: Container csi-resizer ready: true, restart count 0 Mar 21 23:36:34.318: INFO: pvc-volume-tester-p8sn5 from csi-mock-volumes-9492 started at 2021-03-21 23:35:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.318: INFO: Container volume-tester ready: true, restart count 0 Mar 21 23:36:34.319: INFO: chaos-daemon-wl4fl from default started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:36:34.319: INFO: coredns-74ff55c5b-7tkvj from kube-system started at 2021-03-21 23:31:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container coredns ready: true, restart count 0 Mar 21 23:36:34.319: INFO: kindnet-vhlbm from kube-system started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:36:34.319: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:36:34.319: INFO: pod-submit-status-0-2 from pods-4168 started at 2021-03-21 23:36:15 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:34.319: INFO: pod-submit-status-2-2 from pods-4168 started at 2021-03-21 23:36:20 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:34.319: INFO: pod-with-label-security-s1 from sched-priority-7863 started at 2021-03-21 23:34:42 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 Mar 21 23:36:34.319: INFO: up-down-3-9z77c from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 
23:36:34.319: INFO: Container up-down-3 ready: true, restart count 0 Mar 21 23:36:34.319: INFO: up-down-3-x4bt5 from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container up-down-3 ready: true, restart count 0 Mar 21 23:36:34.319: INFO: up-down-3-zkvdj from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container up-down-3 ready: true, restart count 0 Mar 21 23:36:34.319: INFO: verify-service-up-exec-pod-8xqx7 from services-3785 started at 2021-03-21 23:36:24 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:36:34.319: INFO: ss2-0 from statefulset-612 started at 2021-03-21 23:36:16 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container webserver ready: true, restart count 0 Mar 21 23:36:34.319: INFO: ss2-1 from statefulset-612 started at 2021-03-21 23:35:36 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container webserver ready: true, restart count 0 Mar 21 23:36:34.319: INFO: ss2-2 from statefulset-612 started at 2021-03-21 23:34:54 +0000 UTC (1 container statuses recorded) Mar 21 23:36:34.319: INFO: Container webserver ready: false, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-20b0d11a-2bdc-418a-bd6d-090aa711f5fb=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-a450aa12-ea8d-4f05-b80d-60cada8879c0 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc3befecf6b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7460/without-toleration to latest-worker2] STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc44d9bd590], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc4e1c61134], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc512986bdd], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc5c7740be7], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.166e7fc643c1a06c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-20b0d11a-2bdc-418a-bd6d-090aa711f5fb: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Removing taint off the node
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.166e7fc643c1a06c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-20b0d11a-2bdc-418a-bd6d-090aa711f5fb: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc3befecf6b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7460/without-toleration to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc44d9bd590], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc4e1c61134], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc512986bdd], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.166e7fc5c7740be7], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-20b0d11a-2bdc-418a-bd6d-090aa711f5fb=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.166e7fc6d3142805], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7460/still-no-tolerations to latest-worker2]
STEP: removing the label kubernetes.io/e2e-label-key-a450aa12-ea8d-4f05-b80d-60cada8879c0 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-a450aa12-ea8d-4f05-b80d-60cada8879c0
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-20b0d11a-2bdc-418a-bd6d-090aa711f5fb=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:36:49.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7460" for this suite. 
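------------------------------
Annotation (not part of the captured output): the spec above taints a node with effect NoSchedule, relaunches a pod without a matching toleration, and expects the FailedScheduling event shown before the taint is removed. Below is a minimal sketch of the two objects involved, written with the k8s.io/api Go types the e2e suite builds on; the key, value, and names are placeholders rather than the randomized kubernetes.io/e2e-taint-key-... value used in this run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The NoSchedule taint the test applies to the chosen node.
	taint := corev1.Taint{
		Key:    "example.com/e2e-taint-key", // placeholder key
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// A toleration that would allow scheduling onto the tainted node.
	// The "still-no-tolerations" pod deliberately omits it, so it only
	// schedules once the taint is removed again.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoSchedule,
	}

	t, _ := json.MarshalIndent(taint, "", "  ")
	tol, _ := json.MarshalIndent(toleration, "", "  ")
	fmt.Printf("taint:\n%s\ntoleration:\n%s\n", t, tol)
}
------------------------------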
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.853 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":16,"completed":10,"skipped":4658,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:36:49.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 21 23:36:50.860: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 21 23:36:51.047: INFO: Waiting for terminating namespaces to be deleted... 
Mar 21 23:36:51.070: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 21 23:36:51.129: INFO: chaos-controller-manager-69c479c674-7xglh from default started at 2021-03-21 23:27:10 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:36:51.129: INFO: chaos-daemon-qkndt from default started at 2021-03-21 18:05:47 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:36:51.129: INFO: kindnet-sbskd from kube-system started at 2021-03-21 18:05:46 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:36:51.129: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-01c4f0fe-ffb0-4c71-8ad8-9f8cff30f891 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-06027d06-559f-406e-9a73-c5c079d558ce from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:04 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-0752e077-379f-452b-9013-591bcd641f16 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:04 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-0887789a-787c-42c7-924a-0379d1cc5048 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-089059bd-6388-4ac2-844c-1197a01b79d9 from persistent-local-volumes-test-8193 started at 2021-03-21 23:33:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-0c418865-34e7-44dc-be69-f11a703f74c3 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-0de3e8ff-f00a-44b1-8773-991992d6574e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-10226faf-c1ef-4afb-8869-3acfa69b0d4c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-12e2cfdb-d9d9-4497-93d9-1b007b79f74a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-16301c2f-3c2c-4ef1-a133-54831bfeb99c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 
23:36:51.129: INFO: pod-1aa14b50-65ac-4dfc-b2e8-562439da5b19 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-1b06547f-b63d-48cb-81a6-84e0357c84b6 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-2767b42e-8f9d-4ace-828a-433be022f310 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: false, restart count 0 Mar 21 23:36:51.129: INFO: pod-27e0cf8f-9a11-4de6-bfd7-bfe1f95ce035 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-28ac377e-f205-45f6-84cc-a27e1ecaf09c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-2d233342-1525-4ff2-91a1-146f868c6814 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-345432ce-5c53-4b60-9412-1c9a395bdd7c from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-35525694-3aee-4acb-b97f-4a9f596b45be from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-38d4cf75-96dc-44b3-af46-c45dcd69a294 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-48047d8c-7439-4c12-a348-fbac2b516959 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-4d68a905-4947-4719-be29-1a14ef9fba30 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-4fb0106b-1599-4307-b169-23bcc9245bea from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-51a22e8d-d20e-4473-8d1c-27b7808e209e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-5661329f-4bb7-4a6c-ab68-d9db626cedfa from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container 
write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-5ce04778-d9d6-40bb-abf7-889fce748d3f from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-5e7513fa-2103-4670-8f8a-3c582b187dcd from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-61465ce3-09e1-450a-8a48-ae88a5206db4 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-7305e400-cb10-411c-8af9-a06c69469066 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-77aaf081-f268-403e-8d65-f0d09e5d6d75 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-78c5a3fc-0f8e-4397-b89a-f3cc8a9a39ca from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-796dc1f8-8fa6-4543-9740-22ac4901b1e4 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.129: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.129: INFO: pod-8d95713e-1a47-404b-a862-abfdb60ede75 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-98e38606-15d4-41e4-aa7f-32b8916dbbea from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-9adb5442-3447-434c-b688-f79175318527 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-a37f3319-bcc9-406f-bda3-398a3e25f577 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-b3b3ce98-d5c7-475d-87a1-c6ed5076711e from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-b5316c30-c9d9-4a5c-9f19-a25490f03e93 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-b9e15f19-8ac3-4cd8-a8bc-5f1bbf017676 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses 
recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-c0118f98-f14e-4bf4-9d84-6fe0aca139e5 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:03 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-c5af9948-3c0c-4884-adba-20f865d330c1 from persistent-local-volumes-test-8193 started at 2021-03-21 23:33:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-d4c825ca-2f98-4a30-8325-8910ea310f21 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-da4759bd-d4df-401a-a992-7300350783fe from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:06 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-e5d4c0fe-9fd2-48aa-9b57-7dc48003ac6a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-e65d8500-8f75-47cf-8af5-242d346004d5 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-eb4089c1-5934-476b-9bef-fb24fe87187a from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-ebd1422a-1278-498a-ab80-b9faa6216c77 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:00 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-ec8e9cdd-52f5-441a-91cb-86be089e2481 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-ed9a418c-2df1-48f0-b932-e010172a8189 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-f1ba7a17-bbf1-4b9f-9e1d-ba6627d83f20 from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:02 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-f4a4a7d1-da33-48cd-a596-b33e45b2af2b from persistent-local-volumes-test-8193 started at 2021-03-21 23:34:01 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container write-pod ready: true, restart count 0 Mar 21 23:36:51.130: INFO: pod-submit-status-1-2 from pods-4168 started at 2021-03-21 23:36:17 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:51.130: INFO: pod-with-pod-antiaffinity from sched-priority-7863 started at 2021-03-21 23:35:42 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: 
INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Mar 21 23:36:51.130: INFO: up-down-2-8dhrn from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:51.130: INFO: up-down-2-ll5sq from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:51.130: INFO: up-down-2-t52rr from services-3785 started at 2021-03-21 23:33:40 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.130: INFO: Container up-down-2 ready: true, restart count 0 Mar 21 23:36:51.130: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 21 23:36:51.605: INFO: csi-mockplugin-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (3 container statuses recorded) Mar 21 23:36:51.605: INFO: Container csi-provisioner ready: true, restart count 0 Mar 21 23:36:51.605: INFO: Container driver-registrar ready: true, restart count 0 Mar 21 23:36:51.605: INFO: Container mock ready: true, restart count 0 Mar 21 23:36:51.605: INFO: csi-mockplugin-attacher-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container csi-attacher ready: true, restart count 0 Mar 21 23:36:51.605: INFO: csi-mockplugin-resizer-0 from csi-mock-volumes-9492-9274 started at 2021-03-21 23:35:39 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container csi-resizer ready: true, restart count 0 Mar 21 23:36:51.605: INFO: pvc-volume-tester-p8sn5 from csi-mock-volumes-9492 started at 2021-03-21 23:35:59 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container volume-tester ready: false, restart count 0 Mar 21 23:36:51.605: INFO: chaos-daemon-wl4fl from default started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:36:51.605: INFO: coredns-74ff55c5b-7tkvj from kube-system started at 2021-03-21 23:31:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container coredns ready: true, restart count 0 Mar 21 23:36:51.605: INFO: kindnet-vhlbm from kube-system started at 2021-03-21 23:31:45 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:36:51.605: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:36:51.605: INFO: pod-submit-status-0-2 from pods-4168 started at 2021-03-21 23:36:15 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:51.605: INFO: pod-submit-status-2-2 from pods-4168 started at 2021-03-21 23:36:20 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container busybox ready: false, restart count 0 Mar 21 23:36:51.605: INFO: still-no-tolerations from sched-pred-7460 started at 2021-03-21 23:36:48 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container still-no-tolerations ready: false, restart count 0 Mar 21 23:36:51.605: INFO: pod-with-label-security-s1 from sched-priority-7863 started at 2021-03-21 23:34:42 +0000 UTC (1 container statuses recorded) Mar 21 
23:36:51.605: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 Mar 21 23:36:51.605: INFO: up-down-3-9z77c from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container up-down-3 ready: true, restart count 0 Mar 21 23:36:51.605: INFO: up-down-3-x4bt5 from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container up-down-3 ready: true, restart count 0 Mar 21 23:36:51.605: INFO: up-down-3-zkvdj from services-3785 started at 2021-03-21 23:35:52 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container up-down-3 ready: true, restart count 0 Mar 21 23:36:51.605: INFO: hairpin from services-9522 started at 2021-03-21 23:36:39 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:36:51.605: INFO: ss2-0 from statefulset-612 started at 2021-03-21 23:36:16 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container webserver ready: true, restart count 0 Mar 21 23:36:51.605: INFO: ss2-1 from statefulset-612 started at 2021-03-21 23:35:36 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container webserver ready: true, restart count 0 Mar 21 23:36:51.605: INFO: ss2-2 from statefulset-612 started at 2021-03-21 23:34:54 +0000 UTC (1 container statuses recorded) Mar 21 23:36:51.605: INFO: Container webserver ready: false, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:37:37.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7153" for this suite. 
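------------------------------
Annotation (not part of the captured output): the PodTopologySpread Filtering spec above applies a dedicated topology key to two nodes and then expects 4 pods constrained by MaxSkew=1 to be distributed evenly across them. Below is a sketch of the kind of constraint involved, again using k8s.io/api types; the topology key and label selector are placeholders standing in for the per-test kubernetes.io/e2e-pts-filter key.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	constraint := corev1.TopologySpreadConstraint{
		MaxSkew:     1,                  // at most 1 pod of imbalance between any two topology domains
		TopologyKey: "example.com/zone", // placeholder; the test uses its own per-node label key
		// DoNotSchedule makes this a hard constraint enforced at filtering time;
		// the soft ScheduleAnyway variant is instead handled by scoring.
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "spread-me"}, // placeholder selector
		},
	}

	out, _ := json.MarshalIndent(constraint, "", "  ")
	fmt.Println(string(out))
}
------------------------------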
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:48.446 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":16,"completed":11,"skipped":5023,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMar 21 23:37:38.112: INFO: Running AfterSuite actions on all nodes Mar 21 23:37:38.112: INFO: Running AfterSuite actions on node 1 Mar 21 23:37:38.112: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling/junit_01.xml {"msg":"Test Suite completed","total":16,"completed":11,"skipped":5724,"failed":2,"failures":["[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed"]} Summarizing 2 Failures: [Fail] [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:864 [Fail] [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring [BeforeEach] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:874 Ran 13 of 5737 Specs in 960.015 seconds FAIL! -- 11 Passed | 2 Failed | 0 Pending | 5724 Skipped --- FAIL: TestE2E (960.07s) FAIL
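------------------------------
Annotation (not part of the captured output): both failures summarized above come from the scheduler suite's PodTopologySpread specs. The preemption one exercises pod priority, which in Kubernetes is assigned through a PriorityClass object referenced via spec.priorityClassName; pod creation is typically rejected if the named class does not exist. Below is a sketch of that relationship with placeholder names and values, not the exact classes this test expects.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A high-priority class; pods using it can preempt lower-priority pods
	// when no node otherwise has room for them.
	pc := schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "example-high-priority"}, // placeholder name
		Value:       1000,
		Description: "placeholder class for illustrating preemption",
	}

	// A pod referencing the class; the class must exist before this pod is created.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority-pod"}, // placeholder name
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"},
			},
		},
	}

	p1, _ := json.MarshalIndent(pc, "", "  ")
	p2, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Printf("%s\n%s\n", p1, p2)
}
------------------------------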