Running Suite: Kubernetes e2e suite - /usr/local/bin ==================================================== Random Seed: 1758500651 - will randomize all specs Will run 168 of 7132 specs Running in parallel across 10 processes SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS••SSSSSSSSSSSSSSSSSSSSSS•SS ------------------------------ • [FAILED] [4.245 seconds] [sig-node] [DRA] control plane [BeforeEach] supports count/resourceclaims.resource.k8s.io ResourceQuota [ConformanceCandidate] [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:2230 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:24:13.174 I0922 00:24:13.174334 17 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:24:13.175 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:24:13.184 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:24:13.187 STEP: selecting nodes @ 09/22/25 00:24:13.191 I0922 00:24:13.292639 17 deploy.go:142] testing on nodes [latest-worker] STEP: deploying driver dra-4245.k8s.io on nodes [latest-worker] @ 09/22/25 00:24:13.293 I0922 00:24:13.296027 17 deploy.go:449] Unexpected error: <*errors.StatusError | 0xc000a588c0>: the server could not find the requested resource (post resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (post resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:449 @ 09/22/25 00:24:13.296 I0922 00:24:13.298482 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:13.298591 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:24:13.298856 17 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc000785360>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 00:24:13.299 STEP: Waiting for ResourceSlices of driver dra-4245.k8s.io to be removed... 
@ 09/22/25 00:24:13.299 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 00:24:13.305 I0922 00:24:13.307263 17 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 00:24:13.392 STEP: Collecting events from namespace "dra-4245". @ 09/22/25 00:24:13.392 STEP: Found 0 events. @ 09/22/25 00:24:13.395 I0922 00:24:13.398576 17 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 00:24:13.398629 17 resource.go:161] I0922 00:24:13.493055 17 dump.go:109] Logging node info for node latest-control-plane I0922 00:24:13.497683 17 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5980447 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 00:23:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:23:55 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet 
has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:23:55 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:23:55 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:23:55 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c 
registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:24:13.497777 17 dump.go:116] Logging kubelet events for node latest-control-plane I0922 00:24:13.501854 17 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 00:24:13.532740 17 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.532776 17 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:24:13.532812 17 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.532834 17 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:24:13.532857 17 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.532877 17 dump.go:134] Container etcd ready: true, restart count 0 I0922 00:24:13.532899 17 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.532918 17 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 00:24:13.532939 17 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.532969 17 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 00:24:13.532990 17 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.533011 17 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 00:24:13.533031 17 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.533048 17 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:24:13.533068 17 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.533087 17 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:24:13.533110 17 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.533128 17 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 00:24:13.533147 17 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.533164 17 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:24:13.607627 17 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 00:24:13.607664 17 dump.go:109] Logging node info for node latest-worker I0922 00:24:13.612052 17 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 
8c88dec8-b208-4951-9edb-9daf6e60cfed 5980510 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:24:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:24:06 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:24:06 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:24:06 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:24:06 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 
registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 
registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:24:13.612110 17 dump.go:116] Logging kubelet events for node latest-worker I0922 00:24:13.616050 17 dump.go:121] 
Logging pods the kubelet thinks are on node latest-worker I0922 00:24:13.640012 17 dump.go:128] services-0/webserver-pod started at 2025-09-22 00:20:13 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.640051 17 dump.go:134] Container agnhost ready: false, restart count 0 I0922 00:24:13.640075 17 dump.go:128] services-8621/webserver-pod started at 2025-09-22 00:20:25 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.640093 17 dump.go:134] Container agnhost ready: false, restart count 0 I0922 00:24:13.640113 17 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.640128 17 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:24:13.640147 17 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.640164 17 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:24:13.640183 17 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.640199 17 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:24:13.640218 17 dump.go:128] kubelet-authz-test-7387/agnhost-pod-proxy started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.640235 17 dump.go:134] Container agnhost-container ready: false, restart count 0 I0922 00:24:13.640254 17 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:24:13.640270 17 dump.go:134] Container pause ready: false, restart count 0 I0922 00:24:13.640289 17 dump.go:128] pod-resize-tests-1318/resize-test-bdd49 started at 2025-09-22 00:24:13 +0000 UTC (0+3 container statuses recorded) I0922 00:24:13.640306 17 dump.go:134] Container c1 ready: false, restart count 0 I0922 00:24:13.640323 17 dump.go:134] Container c2 ready: false, restart count 0 I0922 00:24:13.640339 17 dump.go:134] Container c3 ready: false, restart count 0 I0922 00:24:14.206519 17 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 00:24:14.206545 17 dump.go:109] Logging node info for node latest-worker2 I0922 00:24:14.211290 17 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5980629 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 
2025-09-22 00:24:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:24:13 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:24:13 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:24:13 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:24:13 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 
docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c 
docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:24:14.211408 17 dump.go:116] Logging kubelet events for node latest-worker2 I0922 00:24:14.214525 17 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 00:24:14.235371 17 dump.go:128] pod-resize-tests-8824/resize-test-bgsjt started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:24:14.235399 17 dump.go:134] Container c1 ready: false, restart count 0 I0922 00:24:14.235413 17 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:24:14.235423 17 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:24:14.235444 17 dump.go:128] pod-resize-tests-508/resize-test-tfmb4 started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:24:14.235458 17 dump.go:134] Container c1 ready: false, restart count 0 I0922 00:24:14.235479 17 dump.go:128] container-probe-1880/probe-test-07983c2f-a15d-4dca-b4e8-8b2490190f92 started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:24:14.235490 17 dump.go:134] Container probe-test-07983c2f-a15d-4dca-b4e8-8b2490190f92 ready: false, restart count 0 I0922 00:24:14.235503 17 dump.go:128] kube-system/kindnet-pthkx started at 
2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:24:14.235513 17 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:24:14.235523 17 dump.go:128] container-lifecycle-hook-6531/pod-handle-http-request started at 2025-09-22 00:24:13 +0000 UTC (0+2 container statuses recorded) I0922 00:24:14.235533 17 dump.go:134] Container container-handle-http-request ready: false, restart count 0 I0922 00:24:14.235542 17 dump.go:134] Container container-handle-https-request ready: false, restart count 0 I0922 00:24:14.235558 17 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:24:14.235568 17 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:24:14.235579 17 dump.go:128] sysctl-8506/sysctl-417291a6-a179-4ef0-908c-acfd036ab803 started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:24:14.235588 17 dump.go:134] Container test-container ready: false, restart count 0 I0922 00:24:14.235598 17 dump.go:128] kubelet-authz-test-7269/agnhost-pod-configz started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:24:14.235610 17 dump.go:134] Container agnhost-container ready: false, restart count 0 I0922 00:24:17.413105 17 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-4245" for this suite. @ 09/22/25 00:24:17.413 << Timeline [FAILED] the server could not find the requested resource (post resourceslices.resource.k8s.io) In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:449 @ 09/22/25 00:24:13.296 There were additional failures detected. To view them in detail run ginkgo -vv ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SS•SSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSS•SSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SS ------------------------------ • [FAILED] [25.061 seconds] [sig-node] Pod InPlace Resize Container [FeatureGate:InPlacePodVerticalScaling] [Beta] [It] Burstable QoS pod with memory requests + limits - decrease memory limit [sig-node, FeatureGate:InPlacePodVerticalScaling, Beta] k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:1037 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:25:29.613 I0922 00:25:29.613761 34 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-resize-tests @ 09/22/25 00:25:29.615 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:25:29.625 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 
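Note on the [sig-node] [DRA] failure above: every resource.k8s.io request issued by the driver deployment helper (POST/DELETE resourceslices, list/watch of *v1.ResourceClaim) returned 404, which strongly suggests the API server is not serving the resource.k8s.io/v1 group version that this e2e binary's DRA utilities use. The cluster under test reports v1.33.1, where the DRA API is still beta (resource.k8s.io/v1beta1/v1beta2 rather than v1), so a test binary built from a newer branch fails in BeforeEach before the ResourceQuota behaviour is ever exercised. A quick way to confirm which versions the server actually advertises is a discovery call against the same kubeconfig the suite logged; the sketch below is a hypothetical, standalone check using client-go (not part of the suite).

package main

// Hypothetical standalone check, not part of the e2e suite: list the versions of
// the resource.k8s.io group that the API server actually serves, using the same
// kubeconfig path the suite logged above. If "resource.k8s.io/v1" is absent,
// v1 ResourceSlice/ResourceClaim requests will 404 exactly as in the DRA BeforeEach.

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatalf("build discovery client: %v", err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		log.Fatalf("discover API groups: %v", err)
	}
	for _, g := range groups.Groups {
		if g.Name != "resource.k8s.io" {
			continue
		}
		for _, v := range g.Versions {
			fmt.Println("served:", v.GroupVersion) // e.g. resource.k8s.io/v1beta1
		}
	}
}

The same information is available from the command line with kubectl --kubeconfig /home/xtesting/.kube/config api-versions | grep resource.k8s.io; if only beta versions appear, the version mismatch between test binary and server, not the quota logic itself, is what this spec is reporting.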
@ 09/22/25 00:25:29.629
STEP: creating pod @ 09/22/25 00:25:29.701
STEP: verifying initial pod resources are as expected @ 09/22/25 00:25:41.74
STEP: verifying initial pod resize policy is as expected @ 09/22/25 00:25:41.74
STEP: verifying initial pod status resources are as expected @ 09/22/25 00:25:41.74
STEP: verifying initial cgroup config are as expected @ 09/22/25 00:25:41.741
I0922 00:25:41.741079 34 exec_util.go:63] ExecWithOptions {Command:[/bin/sh -c mount -t cgroup2] Namespace:pod-resize-tests-8665 PodName:resize-test-69fmp ContainerName:c1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
I0922 00:25:41.741104 34 exec_util.go:68] ExecWithOptions: Clientset creation
I0922 00:25:41.741189 34 exec_util.go:84] ExecWithOptions: execute(https://172.30.13.90:35009/api/v1/namespaces/pod-resize-tests-8665/pods/resize-test-69fmp/exec?command=%2Fbin%2Fsh&command=-c&command=mount+-t+cgroup2&container=c1&stderr=true&stdout=true)
I0922 00:25:43.948142 34 cgroups.go:375] Namespace pod-resize-tests-8665 Pod resize-test-69fmp Container c1 - looking for one of the expected cgroup values [31457280] in path /sys/fs/cgroup/memory/memory.limit_in_bytes
I0922 00:25:43.948209 34 exec_util.go:63] ExecWithOptions {Command:[/bin/sh -c head -n 1 /sys/fs/cgroup/memory/memory.limit_in_bytes] Namespace:pod-resize-tests-8665 PodName:resize-test-69fmp ContainerName:c1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
I0922 00:25:43.948232 34 exec_util.go:68] ExecWithOptions: Clientset creation
I0922 00:25:43.948345 34 exec_util.go:84] ExecWithOptions: execute(https://172.30.13.90:35009/api/v1/namespaces/pod-resize-tests-8665/pods/resize-test-69fmp/exec?command=%2Fbin%2Fsh&command=-c&command=head+-n+1+%2Fsys%2Ffs%2Fcgroup%2Fmemory%2Fmemory.limit_in_bytes&container=c1&stderr=true&stdout=true)
I0922 00:25:46.351127 34 cgroups.go:375] Namespace pod-resize-tests-8665 Pod resize-test-69fmp Container c1 - looking for one of the expected cgroup values [3000] in path /sys/fs/cgroup/cpu/cpu.cfs_quota_us
I0922 00:25:46.351200 34 exec_util.go:63] ExecWithOptions {Command:[/bin/sh -c head -n 1 /sys/fs/cgroup/cpu/cpu.cfs_quota_us] Namespace:pod-resize-tests-8665 PodName:resize-test-69fmp ContainerName:c1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
I0922 00:25:46.351224 34 exec_util.go:68] ExecWithOptions: Clientset creation
I0922 00:25:46.351330 34 exec_util.go:84] ExecWithOptions: execute(https://172.30.13.90:35009/api/v1/namespaces/pod-resize-tests-8665/pods/resize-test-69fmp/exec?command=%2Fbin%2Fsh&command=-c&command=head+-n+1+%2Fsys%2Ffs%2Fcgroup%2Fcpu%2Fcpu.cfs_quota_us&container=c1&stderr=true&stdout=true)
I0922 00:25:48.948043 34 cgroups.go:375] Namespace pod-resize-tests-8665 Pod resize-test-69fmp Container c1 - looking for one of the expected cgroup values [20] in path /sys/fs/cgroup/cpu/cpu.shares
I0922 00:25:48.948114 34 exec_util.go:63] ExecWithOptions {Command:[/bin/sh -c head -n 1 /sys/fs/cgroup/cpu/cpu.shares] Namespace:pod-resize-tests-8665 PodName:resize-test-69fmp ContainerName:c1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
I0922 00:25:48.948143 34 exec_util.go:68] ExecWithOptions: Clientset creation
I0922 00:25:48.948250 34 exec_util.go:84] ExecWithOptions:
execute(https://172.30.13.90:35009/api/v1/namespaces/pod-resize-tests-8665/pods/resize-test-69fmp/exec?command=%2Fbin%2Fsh&command=-c&command=head+-n+1+%2Fsys%2Ffs%2Fcgroup%2Fcpu%2Fcpu.shares&container=c1&stderr=true&stdout=true) STEP: patching pod for resize @ 09/22/25 00:25:50.844 I0922 00:25:50.858383 34 pod_resize.go:1079] Unexpected error: failed to patch pod for resize: <*errors.StatusError | 0xc001d8fae0>: Pod "resize-test-69fmp" is invalid: spec.containers[0].resources.limits[memory]: Forbidden: memory limits cannot be decreased unless resizePolicy is RestartContainer { ErrStatus: code: 422 details: causes: - field: spec.containers[0].resources.limits[memory] message: 'Forbidden: memory limits cannot be decreased unless resizePolicy is RestartContainer' reason: FieldValueForbidden kind: Pod name: resize-test-69fmp message: 'Pod "resize-test-69fmp" is invalid: spec.containers[0].resources.limits[memory]: Forbidden: memory limits cannot be decreased unless resizePolicy is RestartContainer' metadata: {} reason: Invalid status: Failure, } [FAILED] in [It] - k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:1079 @ 09/22/25 00:25:50.858 I0922 00:25:50.859621 34 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 00:25:50.864 STEP: Collecting events from namespace "pod-resize-tests-8665". @ 09/22/25 00:25:50.864 STEP: Found 4 events. @ 09/22/25 00:25:50.867 I0922 00:25:50.867580 34 dump.go:53] At 2025-09-22 00:25:29 +0000 UTC - event for resize-test-69fmp: {default-scheduler } Scheduled: Successfully assigned pod-resize-tests-8665/resize-test-69fmp to latest-worker I0922 00:25:50.867598 34 dump.go:53] At 2025-09-22 00:25:33 +0000 UTC - event for resize-test-69fmp: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.37.0-1" already present on machine I0922 00:25:50.867611 34 dump.go:53] At 2025-09-22 00:25:33 +0000 UTC - event for resize-test-69fmp: {kubelet latest-worker} Created: Created container: c1 I0922 00:25:50.867622 34 dump.go:53] At 2025-09-22 00:25:40 +0000 UTC - event for resize-test-69fmp: {kubelet latest-worker} Started: Started container c1 I0922 00:25:50.871467 34 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 00:25:50.871569 34 resource.go:158] resize-test-69fmp latest-worker Running [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:40 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:29 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:40 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:40 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:29 +0000 UTC }] I0922 00:25:50.871580 34 resource.go:161] I0922 00:25:50.882898 34 dump.go:109] Logging node info for node latest-control-plane I0922 00:25:50.886918 34 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5981417 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 00:25:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:27 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:27 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:27 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:25:27 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 
(bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:25:50.886970 34 dump.go:116] Logging kubelet events for node latest-control-plane I0922 00:25:50.890860 34 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 00:25:50.917361 34 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917397 34 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:25:50.917421 34 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917439 34 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:25:50.917463 34 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917482 34 dump.go:134] Container local-path-provisioner ready: true, restart count 0 
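Editor's note: the ExecWithOptions entries in this timeline (the long execute(https://.../pods/<pod>/exec?command=...) URLs, such as the cpu.shares read at the top of it) are pod "exec" subresource calls that the test uses to read cgroup files inside the running container. Below is a minimal, hypothetical client-go sketch of the same kind of read; the namespace and pod name are placeholders, and this is not the e2e framework's own ExecWithOptions helper.

// Hypothetical sketch (not the e2e ExecWithOptions helper): reading a cgroup
// file in a running container via the pod "exec" subresource, which is what
// the execute(.../pods/<pod>/exec?command=...) URLs above correspond to.
package main

import (
    "bytes"
    "context"
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
)

func main() {
    // Kubeconfig path taken from this run's log; adjust as needed.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Namespace and pod name are placeholders for the sketch.
    req := cs.CoreV1().RESTClient().Post().
        Resource("pods").
        Namespace("default").
        Name("example-pod").
        SubResource("exec").
        VersionedParams(&v1.PodExecOptions{
            Container: "c1",
            Command:   []string{"/bin/sh", "-c", "head -n 1 /sys/fs/cgroup/cpu/cpu.shares"},
            Stdout:    true,
            Stderr:    true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    var stdout, stderr bytes.Buffer
    if err := exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
        Stdout: &stdout,
        Stderr: &stderr,
    }); err != nil {
        panic(err)
    }
    fmt.Print(stdout.String()) // e.g. the current cpu.shares value
}

The query string visible in the logged URLs (command=..., container=c1, stdout=true, stderr=true) is what VersionedParams encodes from PodExecOptions.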
I0922 00:25:50.917501 34 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917519 34 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:25:50.917536 34 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917553 34 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:25:50.917576 34 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917593 34 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:25:50.917613 34 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917630 34 dump.go:134] Container etcd ready: true, restart count 0 I0922 00:25:50.917649 34 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917665 34 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 00:25:50.917684 34 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917702 34 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 00:25:50.917721 34 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:25:50.917738 34 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 00:25:50.994163 34 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 00:25:50.994205 34 dump.go:109] Logging node info for node latest-worker I0922 00:25:50.998793 34 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5981617 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:25:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:38 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:38 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:38 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:25:38 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 
docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 
registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 
docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:25:50.998851 34 dump.go:116] Logging kubelet events for node latest-worker I0922 00:25:51.002705 34 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 00:25:51.015845 34 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 00:25:51.015883 34 dump.go:134] Container pause ready: true, restart count 0 I0922 00:25:51.015910 34 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:25:51.015930 34 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:25:51.015955 34 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:25:51.015974 34 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:25:51.015997 34 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:25:51.016016 34 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:25:51.016037 34 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:25:51.016056 34 dump.go:134] Container pause ready: true, restart count 0 I0922 00:25:51.016078 34 dump.go:128] pod-resize-tests-8665/resize-test-69fmp started at 2025-09-22 00:25:29 +0000 UTC (0+1 container statuses recorded) I0922 00:25:51.016094 34 dump.go:134] Container c1 ready: 
true, restart count 0 I0922 00:25:51.016115 34 dump.go:128] pods-6729/pod-ready started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 00:25:51.016133 34 dump.go:134] Container pod-readiness-gate ready: true, restart count 0 I0922 00:25:51.016158 34 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 00:25:51.016174 34 dump.go:134] Container pause ready: true, restart count 0 I0922 00:25:51.016195 34 dump.go:128] pod-resize-tests-2937/resize-test-snwvl started at 2025-09-22 00:25:42 +0000 UTC (0+2 container statuses recorded) I0922 00:25:51.016214 34 dump.go:134] Container c1 ready: false, restart count 0 I0922 00:25:51.016234 34 dump.go:134] Container c2 ready: false, restart count 0 I0922 00:25:53.122209 34 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 00:25:53.122248 34 dump.go:109] Logging node info for node latest-worker2 I0922 00:25:53.126592 34 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5981687 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:25:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:45 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:45 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:25:45 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:25:45 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 
registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 
docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:25:53.126648 34 dump.go:116] Logging kubelet events for node latest-worker2 I0922 00:25:53.130889 34 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 00:25:53.144271 34 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 00:25:53.144315 34 dump.go:134] Container pause ready: true, restart count 0 I0922 00:25:53.144339 34 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:25:53.144358 34 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:25:53.144376 34 dump.go:128] pod-resize-tests-6743/testpod started at 2025-09-22 00:25:50 +0000 UTC (0+1 container statuses recorded) I0922 00:25:53.144394 34 dump.go:134] Container c1 ready: false, restart count 0 I0922 00:25:53.144413 34 dump.go:128] pod-resize-tests-294/resize-test-zttfn started at 2025-09-22 00:25:09 +0000 UTC (0+1 container statuses recorded) I0922 00:25:53.144429 34 dump.go:134] Container c1 ready: true, restart count 0 I0922 00:25:53.144450 34 dump.go:128] container-probe-948/busybox-d76b94b1-0bf8-4c27-b332-c5e666380dc5 started at 2025-09-22 00:24:58 +0000 UTC (0+1 container statuses recorded) I0922 00:25:53.144467 34 dump.go:134] Container busybox ready: true, restart count 0 I0922 00:25:53.144487 34 dump.go:128] container-probe-3728/startup-3f074bd6-3648-4211-a896-f51c1bfed08b started at 2025-09-22 00:25:21 +0000 UTC (0+1 container statuses recorded) I0922 00:25:53.144507 34 dump.go:134] Container busybox ready: false, restart count 0 I0922 00:25:53.144526 34 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:25:53.144543 34 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:25:53.144563 34 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:25:53.144580 34 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:25:54.668044 34 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "pod-resize-tests-8665" for this suite. 
@ 09/22/25 00:25:54.668 << Timeline [FAILED] failed to patch pod for resize: Pod "resize-test-69fmp" is invalid: spec.containers[0].resources.limits[memory]: Forbidden: memory limits cannot be decreased unless resizePolicy is RestartContainer In [It] at: k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:1079 @ 09/22/25 00:25:50.858 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSS ------------------------------ • [FAILED] [3.841 seconds] [sig-node] Pod InPlace Resize Container [FeatureGate:InPlacePodVerticalScaling] [Beta] [It] decrease memory limit below usage [sig-node, FeatureGate:InPlacePodVerticalScaling, Beta] k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:1280 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:26:48.787 I0922 00:26:48.787934 25 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pod-resize-tests @ 09/22/25 00:26:48.789 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:26:48.798 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:26:48.803 STEP: creating pod @ 09/22/25 00:26:48.869 STEP: verifying initial pod resources are as expected @ 09/22/25 00:26:50.889 STEP: verifying initial pod resize policy is as expected @ 09/22/25 00:26:50.89 STEP: verifying initial pod status resources are as expected @ 09/22/25 00:26:50.89 STEP: verifying initial cgroup config are as expected @ 09/22/25 00:26:50.89 I0922 00:26:50.890233 25 cgroups.go:375] Namespace pod-resize-tests-2064 Pod testpod Container c1 - looking for one of the expected cgroup values [20971520] in path /sys/fs/cgroup/memory/memory.limit_in_bytes I0922 00:26:50.890276 25 exec_util.go:63] ExecWithOptions {Command:[/bin/sh -c head -n 1 /sys/fs/cgroup/memory/memory.limit_in_bytes] Namespace:pod-resize-tests-2064 PodName:testpod ContainerName:c1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} I0922 00:26:50.890294 25 exec_util.go:68] ExecWithOptions: Clientset creation I0922 00:26:50.890372 25 exec_util.go:84] ExecWithOptions: execute(https://172.30.13.90:35009/api/v1/namespaces/pod-resize-tests-2064/pods/testpod/exec?command=%2Fbin%2Fsh&command=-c&command=head+-n+1+%2Fsys%2Ffs%2Fcgroup%2Fmemory%2Fmemory.limit_in_bytes&container=c1&stderr=true&stdout=true) I0922 00:26:50.997378 25 cgroups.go:375] Namespace pod-resize-tests-2064 Pod testpod Container c1 - looking for one of the expected cgroup values [-1] in path /sys/fs/cgroup/cpu/cpu.cfs_quota_us I0922 00:26:50.997406 25 exec_util.go:63] ExecWithOptions {Command:[/bin/sh -c head -n 1 /sys/fs/cgroup/cpu/cpu.cfs_quota_us] Namespace:pod-resize-tests-2064 PodName:testpod ContainerName:c1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} I0922 00:26:50.997416 25 exec_util.go:68] ExecWithOptions: Clientset creation I0922 00:26:50.997471 25 exec_util.go:84] ExecWithOptions: 
execute(https://172.30.13.90:35009/api/v1/namespaces/pod-resize-tests-2064/pods/testpod/exec?command=%2Fbin%2Fsh&command=-c&command=head+-n+1+%2Fsys%2Ffs%2Fcgroup%2Fcpu%2Fcpu.cfs_quota_us&container=c1&stderr=true&stdout=true) I0922 00:26:51.089739 25 cgroups.go:375] Namespace pod-resize-tests-2064 Pod testpod Container c1 - looking for one of the expected cgroup values [2] in path /sys/fs/cgroup/cpu/cpu.shares I0922 00:26:51.089835 25 exec_util.go:63] ExecWithOptions {Command:[/bin/sh -c head -n 1 /sys/fs/cgroup/cpu/cpu.shares] Namespace:pod-resize-tests-2064 PodName:testpod ContainerName:c1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} I0922 00:26:51.089855 25 exec_util.go:68] ExecWithOptions: Clientset creation I0922 00:26:51.089966 25 exec_util.go:84] ExecWithOptions: execute(https://172.30.13.90:35009/api/v1/namespaces/pod-resize-tests-2064/pods/testpod/exec?command=%2Fbin%2Fsh&command=-c&command=head+-n+1+%2Fsys%2Ffs%2Fcgroup%2Fcpu%2Fcpu.shares&container=c1&stderr=true&stdout=true) STEP: Patching pod with a slightly lowered memory limit @ 09/22/25 00:26:51.184 I0922 00:26:51.197650 25 pod_resize.go:1316] Unexpected error: failed to patch pod for viable lowered limit: <*errors.StatusError | 0xc000b63220>: Pod "testpod" is invalid: spec.containers[0].resources.limits[memory]: Forbidden: memory limits cannot be decreased unless resizePolicy is RestartContainer { ErrStatus: code: 422 details: causes: - field: spec.containers[0].resources.limits[memory] message: 'Forbidden: memory limits cannot be decreased unless resizePolicy is RestartContainer' reason: FieldValueForbidden kind: Pod name: testpod message: 'Pod "testpod" is invalid: spec.containers[0].resources.limits[memory]: Forbidden: memory limits cannot be decreased unless resizePolicy is RestartContainer' metadata: {} reason: Invalid status: Failure, } [FAILED] in [It] - k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:1316 @ 09/22/25 00:26:51.197 I0922 00:26:51.198909 25 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 00:26:51.204 STEP: Collecting events from namespace "pod-resize-tests-2064". @ 09/22/25 00:26:51.204 STEP: Found 4 events. 
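Editor's note: the expected cgroup values checked just above ([20971520] in memory.limit_in_bytes, [-1] in cpu.cfs_quota_us, [2] in cpu.shares) are the cgroup v1 encodings of the pod's resources: 20971520 bytes is a 20Mi memory limit, -1 means no CFS quota because no CPU limit is set, and 2 is the kernel-minimum cpu.shares used when the container has no (or a very small) CPU request. A small sketch of that arithmetic follows, assuming cgroup v1 with the default 100ms CFS period; it mirrors the kubelet's conversion but is not the e2e helper itself.

// Hypothetical sketch of the cgroup-v1 arithmetic behind the expected values above.
package main

import "fmt"

const (
    minShares     = 2      // kernel-enforced minimum for cpu.shares
    sharesPerCPU  = 1024   // cpu.shares granted per full CPU
    milliCPUToCPU = 1000
    quotaPeriod   = 100000 // default CFS period in microseconds
)

// memoryLimitBytes: memory.limit_in_bytes is simply the memory limit in bytes.
func memoryLimitBytes(limitMi int64) int64 { return limitMi * 1024 * 1024 }

// cpuShares: cgroup v1 cpu.shares derived from the CPU request in millicores.
func cpuShares(requestMilliCPU int64) int64 {
    shares := requestMilliCPU * sharesPerCPU / milliCPUToCPU
    if shares < minShares {
        return minShares
    }
    return shares
}

// cpuQuota: cpu.cfs_quota_us derived from the CPU limit in millicores;
// -1 means "no quota" when no CPU limit is set.
func cpuQuota(limitMilliCPU int64) int64 {
    if limitMilliCPU == 0 {
        return -1
    }
    return limitMilliCPU * quotaPeriod / milliCPUToCPU
}

func main() {
    fmt.Println(memoryLimitBytes(20)) // 20971520, as in memory.limit_in_bytes above
    fmt.Println(cpuQuota(0))          // -1, no CPU limit set
    fmt.Println(cpuShares(0))         // 2, minimum shares with no CPU request
}

(On a cgroup v2 node the equivalent files would be memory.max and cpu.max; the paths read in this run are the cgroup v1 ones shown above.)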
@ 09/22/25 00:26:51.207 I0922 00:26:51.207659 25 dump.go:53] At 2025-09-22 00:26:48 +0000 UTC - event for testpod: {default-scheduler } Scheduled: Successfully assigned pod-resize-tests-2064/testpod to latest-worker2 I0922 00:26:51.207696 25 dump.go:53] At 2025-09-22 00:26:49 +0000 UTC - event for testpod: {kubelet latest-worker2} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.37.0-1" already present on machine I0922 00:26:51.207718 25 dump.go:53] At 2025-09-22 00:26:49 +0000 UTC - event for testpod: {kubelet latest-worker2} Created: Created container: c1 I0922 00:26:51.207737 25 dump.go:53] At 2025-09-22 00:26:49 +0000 UTC - event for testpod: {kubelet latest-worker2} Started: Started container c1 I0922 00:26:51.210987 25 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 00:26:51.211077 25 resource.go:158] testpod latest-worker2 Running [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:49 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:48 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:49 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:49 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:48 +0000 UTC }] I0922 00:26:51.211094 25 resource.go:161] I0922 00:26:51.222498 25 dump.go:109] Logging node info for node latest-control-plane I0922 00:26:51.226503 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5982847 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 00:26:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 
docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:26:51.226545 25 dump.go:116] Logging kubelet events for node latest-control-plane I0922 00:26:51.230278 25 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 00:26:51.250280 25 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250314 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:26:51.250339 25 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250357 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:26:51.250380 25 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250401 25 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:26:51.250420 25 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250438 25 dump.go:134] Container etcd ready: true, restart count 0 I0922 00:26:51.250457 25 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250475 25 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 00:26:51.250495 25 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250512 25 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 00:26:51.250532 25 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250549 25 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 00:26:51.250568 25 dump.go:128] kube-system/kindnet-qc9kz 
started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250585 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:26:51.250601 25 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250617 25 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:26:51.250637 25 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.250656 25 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 00:26:51.324783 25 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 00:26:51.324822 25 dump.go:109] Logging node info for node latest-worker I0922 00:26:51.329125 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5982871 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:26:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 
UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 
registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f 
registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:26:51.329209 25 dump.go:116] Logging kubelet events for node latest-worker I0922 00:26:51.332904 25 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 00:26:51.349211 25 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.349267 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:51.349310 25 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.349350 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:51.349383 25 dump.go:128] pod-lifecycle-sleep-action-3845/pod-with-prestop-sleep-hook started at 2025-09-22 00:26:16 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.349417 25 dump.go:134] Container pod-with-prestop-sleep-hook ready: true, restart count 0 I0922 00:26:51.349455 25 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.349493 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:51.349530 25 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.349559 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:26:51.349592 25 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.349621 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:26:51.349658 25 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.349688 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:26:51.349713 25 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:26:51.349730 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:52.035813 25 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 00:26:52.035850 25 dump.go:109] Logging node info for node latest-worker2 I0922 00:26:52.039660 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5982570 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] 
map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:26:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 
(bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 
docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 
litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 
00:26:52.039730 25 dump.go:116] Logging kubelet events for node latest-worker2 I0922 00:26:52.042546 25 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 00:26:52.053597 25 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053642 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:52.053656 25 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053665 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:26:52.053675 25 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053685 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:52.053695 25 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053704 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:26:52.053716 25 dump.go:128] pod-resize-tests-2064/testpod started at 2025-09-22 00:26:48 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053725 25 dump.go:134] Container c1 ready: true, restart count 0 I0922 00:26:52.053737 25 dump.go:128] container-probe-3728/startup-3f074bd6-3648-4211-a896-f51c1bfed08b started at 2025-09-22 00:25:21 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053745 25 dump.go:134] Container busybox ready: false, restart count 0 I0922 00:26:52.053755 25 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053767 25 dump.go:134] Container pause ready: false, restart count 0 I0922 00:26:52.053779 25 dump.go:128] pod-resize-tests-1702/resize-test-zbt9r started at 2025-09-22 00:26:48 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053789 25 dump.go:134] Container c1 ready: true, restart count 0 I0922 00:26:52.053800 25 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053809 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:26:52.053820 25 dump.go:128] security-context-test-450/busybox-privileged-true-637b0526-f232-4044-8766-a497d18431b8 started at 2025-09-22 00:26:44 +0000 UTC (0+1 container statuses recorded) I0922 00:26:52.053829 25 dump.go:134] Container busybox-privileged-true-637b0526-f232-4044-8766-a497d18431b8 ready: false, restart count 0 I0922 00:26:52.620472 25 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "pod-resize-tests-2064" for this suite. 
@ 09/22/25 00:26:52.621
<< Timeline

[FAILED] failed to patch pod for viable lowered limit: Pod "testpod" is invalid: spec.containers[0].resources.limits[memory]: Forbidden: memory limits cannot be decreased unless resizePolicy is RestartContainer
In [It] at: k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:1316 @ 09/22/25 00:26:51.197
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
• [FAILED] [5.514 seconds]
[sig-node] Downward API [Feature:PodLevelResources] [FeatureGate:PodLevelResources] [Beta] Downward API tests for pod level resources [It] should provide default limits.cpu/memory from pod level resources or node allocatable [sig-node, Feature:PodLevelResources, FeatureGate:PodLevelResources, Beta]
k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:480
Timeline >>
STEP: Creating a kubernetes client @ 09/22/25 00:26:52.646
I0922 00:26:52.646563 25 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename downward-api @ 09/22/25 00:26:52.648
STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:26:52.658
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:26:52.663
STEP: Saw pod success @ 09/22/25 00:26:56.691
STEP: Checking logs from node latest-worker pod downward-api-9d3d20e2-eb07-48fd-a8de-4c891f675ae5 container dapi-container @ 09/22/25 00:26:56.706
[FAILED] in [It] - k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:572 @ 09/22/25 00:26:56.706
STEP: delete the pods @ 09/22/25 00:26:56.707
I0922 00:26:56.720085 25 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready
STEP: dump namespace information after failure @ 09/22/25 00:26:56.724
STEP: Collecting events from namespace "downward-api-6524". @ 09/22/25 00:26:56.724
STEP: Found 4 events.
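The [FAILED] summary above is the apiserver rejecting an in-place decrease of a container memory limit because the container never opted into a restart on memory resize. A minimal sketch of a pod spec that would accept such a lowered limit, written against the k8s.io/api/core/v1 in-place-resize types (available since Kubernetes 1.27); the names, image, and quantities are illustrative and not the exact spec the e2e test constructs:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// resizablePod returns a pod whose memory limit may later be patched downward:
// the container declares resizePolicy RestartContainer for memory, so the
// kubelet restarts it to apply the smaller limit instead of the apiserver
// rejecting the patch as Forbidden.
func resizablePod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "testpod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "c1",
                Image: "registry.k8s.io/e2e-test-images/busybox:1.37.0-1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("64Mi"),
                    },
                    Limits: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("128Mi"),
                    },
                },
                // Without this entry a memory limit may only grow in place;
                // shrinking it yields the Forbidden error seen in the summary.
                ResizePolicy: []corev1.ContainerResizePolicy{{
                    ResourceName:  corev1.ResourceMemory,
                    RestartPolicy: corev1.RestartContainer,
                }},
            }},
        },
    }
}

func main() {
    fmt.Println(resizablePod().Spec.Containers[0].ResizePolicy)
}

The declared resizePolicy is what the error message asks for: with RestartContainer set for memory, a later patch to a smaller limits[memory] restarts c1 rather than being refused.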
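The Downward API spec that opens above exercises pod-level resources: the container sets no limits of its own, and the expectation named in the spec title is that limits.cpu and limits.memory exposed through the downward API default to the pod-level limits (or to node allocatable when those are also absent); the events and node dump continuing below are that spec's failure diagnostics. A rough sketch of that wiring, assuming a client/API version that carries the pod-level spec.resources field behind the PodLevelResources feature gate; pod name, image, command, and quantities are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podLevelResourcesPod declares limits only at the pod level and surfaces the
// defaulted container limits to the workload through downward API env vars.
func podLevelResourcesPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-pod"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            // Pod-level limits (requires the PodLevelResources feature gate);
            // the container below deliberately sets none of its own.
            Resources: &corev1.ResourceRequirements{
                Limits: corev1.ResourceList{
                    corev1.ResourceCPU:    resource.MustParse("1"),
                    corev1.ResourceMemory: resource.MustParse("256Mi"),
                },
            },
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "registry.k8s.io/e2e-test-images/busybox:1.37.0-1",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{
                    {
                        Name: "CPU_LIMIT",
                        ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
                        },
                    },
                    {
                        Name: "MEMORY_LIMIT",
                        ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
                        },
                    },
                },
            }},
        },
    }
}

func main() {
    fmt.Println(podLevelResourcesPod().Spec.Resources.Limits)
}

Under the stated assumption, the container's env output should report the defaulted limits (here 1 CPU and 256Mi) even though dapi-container declares no resources itself, which is the behaviour the check at downwardapi.go:572 appears to verify from the container logs.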
@ 09/22/25 00:26:56.728 I0922 00:26:56.728189 25 dump.go:53] At 2025-09-22 00:26:52 +0000 UTC - event for downward-api-9d3d20e2-eb07-48fd-a8de-4c891f675ae5: {default-scheduler } Scheduled: Successfully assigned downward-api-6524/downward-api-9d3d20e2-eb07-48fd-a8de-4c891f675ae5 to latest-worker I0922 00:26:56.728217 25 dump.go:53] At 2025-09-22 00:26:53 +0000 UTC - event for downward-api-9d3d20e2-eb07-48fd-a8de-4c891f675ae5: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/e2e-test-images/busybox:1.37.0-1" already present on machine I0922 00:26:56.728239 25 dump.go:53] At 2025-09-22 00:26:53 +0000 UTC - event for downward-api-9d3d20e2-eb07-48fd-a8de-4c891f675ae5: {kubelet latest-worker} Created: Created container: dapi-container I0922 00:26:56.728259 25 dump.go:53] At 2025-09-22 00:26:53 +0000 UTC - event for downward-api-9d3d20e2-eb07-48fd-a8de-4c891f675ae5: {kubelet latest-worker} Started: Started container dapi-container I0922 00:26:56.731194 25 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 00:26:56.731231 25 resource.go:161] I0922 00:26:56.734944 25 dump.go:109] Logging node info for node latest-control-plane I0922 00:26:56.738559 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5982847 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 00:26:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki 
BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 
docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:26:56.738600 25 dump.go:116] Logging kubelet events for node latest-control-plane I0922 00:26:56.742316 25 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 00:26:56.756798 25 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.756831 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:26:56.756858 25 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.756878 25 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:26:56.756898 25 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.756916 25 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 00:26:56.756936 25 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.756953 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:26:56.756974 25 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.756994 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:26:56.757013 25 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.757030 25 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:26:56.757048 25 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.757066 25 dump.go:134] Container etcd ready: true, restart count 0 I0922 00:26:56.757085 25 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.757101 25 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 00:26:56.757122 25 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.757139 25 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 00:26:56.757158 25 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.757175 25 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 00:26:56.837239 25 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 
00:26:56.837278 25 dump.go:109] Logging node info for node latest-worker I0922 00:26:56.841478 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5982871 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:26:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:26:50 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 
registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 
registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:26:56.841557 25 dump.go:116] Logging kubelet events for node latest-worker I0922 00:26:56.845414 25 dump.go:121] 
Logging pods the kubelet thinks are on node latest-worker I0922 00:26:56.859640 25 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.859678 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:26:56.859699 25 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.859718 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:26:56.859739 25 dump.go:128] pods-7197/pod-submit-remove-2d8d8d33-867b-4c9b-b6db-f0f2f44eb34c started at 2025-09-22 00:26:55 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.859769 25 dump.go:134] Container agnhost-container ready: true, restart count 0 I0922 00:26:56.859790 25 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.859825 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:56.859845 25 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.859862 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:56.859880 25 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.859894 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:56.859913 25 dump.go:128] pod-lifecycle-sleep-action-3845/pod-with-prestop-sleep-hook started at 2025-09-22 00:26:16 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.859934 25 dump.go:134] Container pod-with-prestop-sleep-hook ready: true, restart count 0 I0922 00:26:56.859952 25 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.859968 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:56.859987 25 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.860010 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:56.860031 25 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:26:56.860048 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:26:57.534545 25 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 00:26:57.534586 25 dump.go:109] Logging node info for node latest-worker2 I0922 00:26:57.539073 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5982570 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:26:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 
docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 
registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 
docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:26:57.539134 25 dump.go:116] Logging kubelet events for node latest-worker2 I0922 00:26:57.542661 25 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 00:26:57.554460 25 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 00:26:57.554495 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:57.554519 25 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:26:57.554539 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:26:57.554560 25 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 00:26:57.554574 25 dump.go:134] Container pause ready: true, 
restart count 0 I0922 00:26:57.554590 25 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:26:57.554607 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:26:57.554626 25 dump.go:128] pod-resize-tests-2064/testpod started at 2025-09-22 00:26:48 +0000 UTC (0+1 container statuses recorded) I0922 00:26:57.554643 25 dump.go:134] Container c1 ready: true, restart count 0 I0922 00:26:57.554662 25 dump.go:128] container-probe-3728/startup-3f074bd6-3648-4211-a896-f51c1bfed08b started at 2025-09-22 00:25:21 +0000 UTC (0+1 container statuses recorded) I0922 00:26:57.554679 25 dump.go:134] Container busybox ready: false, restart count 0 I0922 00:26:57.554698 25 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 00:26:57.554715 25 dump.go:134] Container pause ready: true, restart count 0 I0922 00:26:57.554735 25 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:26:57.554752 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:26:58.151417 25 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "downward-api-6524" for this suite. @ 09/22/25 00:26:58.152 << Timeline [FAILED] expected "CPU_LIMIT=2" in container output: KUBERNETES_SERVICE_PORT=443 KUBERNETES_PORT=tcp://10.96.0.1:443 HOSTNAME=downward-api-9d3d20e2-eb07-48fd-a8de-4c891f675ae5 SHLVL=1 HOME=/root KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_PROTO=tcp CPU_LIMIT=88 KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443 KUBERNETES_SERVICE_PORT_HTTPS=443 KUBERNETES_SERVICE_HOST=10.96.0.1 PWD=/ MEMORY_LIMIT=67398062080 In [It] at: k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:572 @ 09/22/25 00:26:56.706 ------------------------------ SSSSSSSS•SSSSSSSSS ------------------------------ • [FAILED] [7.720 seconds] [sig-node] Pods Extended (pod generation) [Feature:PodObservedGenerationTracking] [FeatureGate:PodObservedGenerationTracking] [Beta] Pod Generation [It] pod rejected by kubelet should have updated generation and observedGeneration [sig-node, Feature:PodObservedGenerationTracking, FeatureGate:PodObservedGenerationTracking, Beta] k8s.io/kubernetes/test/e2e/node/pods.go:605 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:27:02.539 I0922 00:27:02.539862 19 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename pods @ 09/22/25 00:27:02.541 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:27:02.55 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:27:02.555 STEP: submitting the pod to kubernetes @ 09/22/25 00:27:02.63 [FAILED] in [It] - k8s.io/kubernetes/test/e2e/node/pods.go:648 @ 09/22/25 00:27:04.65 STEP: deleting the pod @ 09/22/25 00:27:04.651 I0922 00:27:04.662246 19 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 00:27:04.666 STEP: Collecting events from namespace "pods-2950". @ 09/22/25 00:27:04.667 STEP: Found 1 events. 
@ 09/22/25 00:27:04.67 I0922 00:27:04.670396 19 dump.go:53] At 2025-09-22 00:27:02 +0000 UTC - event for pod-out-of-cpu: {default-scheduler } FailedScheduling: 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient cpu. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod. I0922 00:27:04.673367 19 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 00:27:04.673404 19 resource.go:161] I0922 00:27:04.677003 19 dump.go:109] Logging node info for node latest-control-plane I0922 00:27:04.680693 19 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5983030 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 00:27:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:27:00 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has 
sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:27:00 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:27:00 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:27:00 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c 
registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:27:04.680730 19 dump.go:116] Logging kubelet events for node latest-control-plane I0922 00:27:04.684381 19 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 00:27:04.698344 19 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698381 19 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 00:27:04.698407 19 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698427 19 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:27:04.698450 19 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698467 19 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:27:04.698490 19 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698510 19 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 00:27:04.698531 19 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698551 19 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:27:04.698573 19 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698592 19 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:27:04.698613 19 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698631 19 dump.go:134] Container coredns ready: true, restart count 0 I0922 00:27:04.698653 19 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698672 19 dump.go:134] Container etcd ready: true, restart count 0 I0922 00:27:04.698693 19 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698711 19 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 00:27:04.698731 19 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.698750 19 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 00:27:04.763918 19 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 00:27:04.763956 19 dump.go:109] Logging node info for node latest-worker I0922 00:27:04.768136 19 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 
8c88dec8-b208-4951-9edb-9daf6e60cfed 5983036 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:27:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:27:00 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:27:00 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:27:00 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:27:00 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 
registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 
registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:27:04.768190 19 dump.go:116] Logging kubelet events for node latest-worker I0922 00:27:04.772032 19 dump.go:121] 
Logging pods the kubelet thinks are on node latest-worker I0922 00:27:04.784641 19 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784674 19 dump.go:134] Container pause ready: true, restart count 0 I0922 00:27:04.784701 19 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784719 19 dump.go:134] Container pause ready: true, restart count 0 I0922 00:27:04.784740 19 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784757 19 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:27:04.784778 19 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784795 19 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:27:04.784814 19 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784830 19 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:27:04.784847 19 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784864 19 dump.go:134] Container pause ready: true, restart count 0 I0922 00:27:04.784883 19 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784900 19 dump.go:134] Container pause ready: true, restart count 0 I0922 00:27:04.784918 19 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784935 19 dump.go:134] Container pause ready: true, restart count 0 I0922 00:27:04.784954 19 dump.go:128] pod-lifecycle-sleep-action-3845/pod-with-prestop-sleep-hook started at 2025-09-22 00:26:16 +0000 UTC (0+1 container statuses recorded) I0922 00:27:04.784971 19 dump.go:134] Container pod-with-prestop-sleep-hook ready: true, restart count 0 I0922 00:27:05.496138 19 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 00:27:05.496173 19 dump.go:109] Logging node info for node latest-worker2 I0922 00:27:05.500991 19 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5982570 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 00:26:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 00:26:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 
docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c 
docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 00:27:05.501046 19 dump.go:116] Logging kubelet events for node latest-worker2 I0922 00:27:05.504875 19 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 00:27:05.514662 19 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 00:27:05.514687 19 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 00:27:05.514705 19 dump.go:128] pod-resize-tests-274/resize-test-psvll started at 2025-09-22 00:26:58 +0000 UTC (0+1 container statuses recorded) I0922 00:27:05.514719 19 dump.go:134] Container c1 ready: false, restart count 0 I0922 00:27:05.514735 19 dump.go:128] container-probe-3728/startup-3f074bd6-3648-4211-a896-f51c1bfed08b started at 2025-09-22 00:25:21 +0000 UTC (0+1 container statuses recorded) I0922 00:27:05.514748 19 dump.go:134] Container busybox ready: false, restart count 0 I0922 00:27:05.514764 19 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 00:27:05.514777 19 dump.go:134] Container pause ready: true, restart count 0 I0922 00:27:05.514789 19 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses 
recorded) I0922 00:27:05.514802 19 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 00:27:05.514817 19 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 00:27:05.514829 19 dump.go:134] Container pause ready: true, restart count 0 I0922 00:27:05.514844 19 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 00:27:05.514854 19 dump.go:134] Container loopdev ready: true, restart count 0 I0922 00:27:05.514868 19 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 00:27:05.514879 19 dump.go:134] Container pause ready: true, restart count 0 I0922 00:27:10.253494 19 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "pods-2950" for this suite. @ 09/22/25 00:27:10.254 << Timeline [FAILED] Expected : 0 to be equivalent to : 1 In [It] at: k8s.io/kubernetes/test/e2e/node/pods.go:648 @ 09/22/25 00:27:04.65 ------------------------------ SSSSSSSSSSSSSSSSS•SSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [TIMEDOUT] [3594.731 seconds] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports simple pod referencing inline resource claim [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:850 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:24:19.701 I0922 00:24:19.701775 27 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:24:19.702 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:24:19.711 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:24:19.715 STEP: selecting nodes @ 09/22/25 00:24:19.72 I0922 00:24:19.790849 27 deploy.go:142] testing on nodes [latest-worker] STEP: deploying driver dra-1198.k8s.io on nodes [latest-worker] @ 09/22/25 00:24:19.791 I0922 00:24:19.796808 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:19.796892 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:24:19.849848 27 create.go:156] creating *v1.ReplicaSet: dra-1198/dra-test-driver I0922 00:24:20.924825 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:20.924938 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:24:21.873791 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:24:23.320547 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:24:23.388208 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:23.388277 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:24:25.768030 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:24:28.534754 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:28.534850 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:24:29.340265 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:24:39.614266 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:24:41.017888 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:41.018003 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:24:57.696737 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:57.696819 27 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:03.006726 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:24.200933 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:24.201029 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:42.086368 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:55.071044 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:55.071137 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:22.553207 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:32.080148 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:32.080244 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:27:05.416854 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:05.416949 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:27:19.230872 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:52.000238 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:52.000341 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:19.030639 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:49.891137 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:49.891258 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:10.667906 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:27.229576 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:27.229674 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:55.406067 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:16.818066 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:16.818172 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:29.284573 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:52.934838 27 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:52.934937 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:18.008318 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:27.181244 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:27.181343 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:31:58.307335 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:58.307435 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:08.570446 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:32.037193 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:32.037305 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:08.411163 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:14.338077 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:14.338174 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:45.095999 27 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:01.630298 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:01.630397 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:30.679282 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:38.792659 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:38.792763 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:35:21.451702 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:21.451840 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:29.585240 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:03.779146 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:03.779250 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:08.156914 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:43.489786 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:43.489906 27 reflector.go:205] "Failed to watch" 
err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:59.162480 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:31.690603 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:31.690729 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:36.311577 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:07.978457 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:07.978564 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:17.116876 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:53.669589 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:53.669719 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:08.739332 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:35.062924 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:35.063033 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:50.941877 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:08.851405 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:08.851485 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:45.763272 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:57.120480 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:57.120586 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:41:32.064667 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:32.064792 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:40.689129 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:18.842229 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:18.842330 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:33.300122 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:06.044947 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" 
resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:06.045044 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:20.778338 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:50.027402 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:50.027509 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:11.970306 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:20.969893 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:20.970002 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:44:52.213338 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:52.213441 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:03.492335 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:45:41.365628 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:49.459023 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:49.459104 27 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:13.701182 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:20.601510 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:20.601613 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:47:00.438571 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:00.438686 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:03.525027 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:44.207381 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:44.207499 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:51.255714 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:30.266036 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:30.266127 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:34.559425 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" 
type="*v1.ResourceSlice" E0922 00:49:21.880083 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:49:24.079474 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:24.079551 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:50:08.993999 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:08.994097 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:11.152140 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:51:04.469038 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:08.015206 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:08.015335 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:51:42.608489 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:42.608594 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:54.682183 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:17.159138 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested 
resource (get resourceclaims.resource.k8s.io)" E0922 00:52:17.159276 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:29.509301 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:11.929643 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:11.929743 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:24.908757 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:49.609591 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:49.609690 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:58.500999 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:19.960365 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:19.960461 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:31.663315 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:57.464859 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:57.464958 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested 
resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:03.738342 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:37.387681 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:37.387778 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:55.914267 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:18.923420 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:18.923500 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:53.018996 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:13.744825 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:13.744925 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:28.806139 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:55.943384 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:55.943487 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:18.221352 27 
reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:32.982081 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:32.982170 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:01.228446 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:27.683648 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:27.683752 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:47.864448 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:06.303111 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:06.303207 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:45.498562 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:48.128041 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:48.128154 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:34.877270 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:01:44.260423 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:44.260525 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:02:34.845241 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:36.082670 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:36.082824 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:06.206055 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:18.742066 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:18.742164 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:54.227124 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:17.052586 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:17.052689 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:29.796675 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:00.593888 27 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:00.593985 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:27.883453 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:47.249765 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:47.249864 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:18.357600 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:39.686100 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:39.686209 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:07:09.814062 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:09.814163 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:09.924705 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:40.406924 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:40.407027 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:56.954598 27 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:27.980421 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:27.980534 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:41.343777 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:20.300434 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:20.300532 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:32.458953 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:14.394051 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:14.394190 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:10:19.694325 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:58.856032 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:58.856148 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:02.843707 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:11:46.347362 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:11:47.452146 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:47.452241 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:12:23.240186 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:23.240300 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:28.589490 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:55.788749 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:55.788844 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:08.958926 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:13:40.744107 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:54.177600 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:54.177708 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:39.737873 27 reflector.go:205] "Failed to watch" err="failed to 
list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:51.419767 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:51.419908 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:18.120368 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:47.209432 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:47.209517 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:48.274571 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:28.855793 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:28.855973 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:35.307854 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:17:08.700313 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:16.227335 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:16.227433 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:17:46.291191 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:46.291293 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:03.589663 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:18:37.770197 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:37.770362 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:53.247620 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:21.018045 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:21.018146 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:24.706124 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:02.126860 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:02.126958 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:15.290306 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:36.418256 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" 
resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:36.418549 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:49.608564 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:20.321722 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:20.321823 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:37.390582 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:22:09.889661 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:19.835086 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:19.835185 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:23:06.768568 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:06.768685 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:07.956169 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:23:44.216195 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:45.440795 27 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:45.440893 27 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.818 I0922 01:24:12.852015 27 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc001002a00>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.852 STEP: Waiting for ResourceSlices of driver dra-1198.k8s.io to be removed... @ 09/22/25 01:24:12.852 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.858 I0922 01:24:12.860567 27 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:12.906 STEP: Collecting events from namespace "dra-1198". @ 09/22/25 01:24:12.906 STEP: Found 6 events. 
@ 09/22/25 01:24:12.909 I0922 01:24:12.909560 27 dump.go:53] At 2025-09-22 00:24:19 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-lvq9b I0922 01:24:12.909600 27 dump.go:53] At 2025-09-22 00:24:19 +0000 UTC - event for dra-test-driver-lvq9b: {default-scheduler } Scheduled: Successfully assigned dra-1198/dra-test-driver-lvq9b to latest-worker I0922 01:24:12.909616 27 dump.go:53] At 2025-09-22 00:24:20 +0000 UTC - event for dra-test-driver-lvq9b: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:12.909640 27 dump.go:53] At 2025-09-22 00:24:20 +0000 UTC - event for dra-test-driver-lvq9b: {kubelet latest-worker} Created: Created container: pause I0922 01:24:12.909652 27 dump.go:53] At 2025-09-22 00:24:20 +0000 UTC - event for dra-test-driver-lvq9b: {kubelet latest-worker} Started: Started container pause I0922 01:24:12.909662 27 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-lvq9b: {kubelet latest-worker} Killing: Stopping container pause I0922 01:24:12.912642 27 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:12.912719 27 resource.go:158] dra-test-driver-lvq9b latest-worker Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:21 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:19 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:21 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:21 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:19 +0000 UTC }] I0922 01:24:12.912731 27 resource.go:161] I0922 01:24:13.005949 27 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:13.010625 27 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 
registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.010693 27 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:13.013858 27 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:13.035191 27 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035260 27 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.035284 27 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035303 27 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:13.035321 27 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035338 27 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:13.035358 27 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035375 27 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:13.035394 27 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035411 27 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.035434 27 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035453 27 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.035470 27 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC 
(0+1 container statuses recorded) I0922 01:24:13.035486 27 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.035528 27 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035548 27 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:13.035567 27 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035584 27 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.035603 27 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035619 27 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.104961 27 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.105002 27 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.109916 27 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f 
registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.109999 27 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.113562 27 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.128121 27 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128159 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.128184 27 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128203 27 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.128224 27 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128242 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.128261 27 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128279 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.128299 27 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128312 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.128352 27 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128370 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.128389 27 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128413 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.128433 27 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128450 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.128469 27 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128493 27 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.128509 27 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.128526 27 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.788466 27 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.788507 27 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.794300 27 
dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 
registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 
litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 
01:24:13.794369 27 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.798220 27 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.812807 27 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.812858 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.812902 27 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.812924 27 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.812944 27 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.812963 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.812982 27 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.812999 27 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.813018 27 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.813035 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.813054 27 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.813072 27 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.813091 27 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.813104 27 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:14.426009 27 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-1198" for this suite. 
@ 09/22/25 01:24:14.426
<< Timeline

[TIMEDOUT] A suite timeout occurred
In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.818

This is the Progress Report generated when the suite timeout occurred:
  [sig-node] [DRA] control plane [ConformanceCandidate] supports simple pod referencing inline resource claim (Spec Runtime: 59m53.117s)
    k8s.io/kubernetes/test/e2e/dra/dra.go:850
    In [BeforeEach] (Node Runtime: 59m53.027s)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
      At [By Step] deploying driver dra-1198.k8s.io on nodes [latest-worker] (Step Runtime: 59m53.027s)
        k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397

  Spec Goroutine
  goroutine 7682 [select]
    k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc0001f8d80, {0x5a48ba8, 0xc000760a50})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537
    k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc0011024e0?}, {{0xc00478d620, 0xf}, {0x5aa1d18, 0xc000b02e00}, 0xc0008df3c0, 0xc00061a148, {0x0, 0x0}, ...})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419
    k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc0011024e0}, {{0xc00478d620, 0xf}, {0x5aa1d18, 0xc000b02e00}, 0xc0008df3c0, 0xc00061a148, {0x0, 0x0}, ...})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179
    k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc00071a1a0, {0xc005283440?, 0x5a3c470?}, {0xc004ae8fc0?})
      k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773
    > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc005283440}, {0x51bc291, 0x4}, {0xc00478d620, 0xf}, {0x5aa1d18, 0xc000b02e00}, {0xc0050903e0, 0xd}, ...)
      k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223
    > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc004f79b80, 0xc00429ecd0, 0xc005a07d40)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615
    > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc004f79b80, 0xc00489e8c0?, 0xc003397e60?)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323
    > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1()
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292
    github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x24cce27?, 0x33755fb?})
      github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7478
      github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881

  Goroutines of Interest
  goroutine 7732 [chan receive, 59 minutes]
    > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc00598ab88)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923
    google.golang.org/grpc.(*Server).Serve(0xc005127400, {0x5a3c4a0, 0xc00598ab88})
      google.golang.org/grpc@v1.72.1/server.go:890
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1()
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7682
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80

  goroutine 7501 [sync.Cond.Wait, 59 minutes]
    sync.runtime_notifyListWait(0xc00120ba68, 0x0)
      runtime/sema.go:597
    sync.(*Cond).Wait(0x51dbe59?)
      sync/cond.go:71
    k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc00120ba40, 0xc0011a4430)
      k8s.io/client-go/tools/cache/the_real_fifo.go:207
    k8s.io/client-go/tools/cache.(*controller).processLoop(0xc0048b4000, {0x5a48dd8, 0xc00489e8c0})
      k8s.io/client-go/tools/cache/controller.go:211
    k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc00489e8c0?}, 0xc005b002d0?)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:255
    k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc00489e8c0}, 0xc00451fdb8, {0x5a06ae0, 0xc005b002d0}, 0x1)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:256
    k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc00489e8c0}, 0xc00451fdb8, 0x3b9aca00, 0x0, 0x1)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:223
    k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:172
    k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc0048b4000, {0x5a48dd8, 0xc00489e8c0})
      k8s.io/client-go/tools/cache/controller.go:183
    k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc004f78000, {0x5a48dd8, 0xc00489e8c0})
      k8s.io/client-go/tools/cache/shared_informer.go:587
    k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc000742230?, 0x10000c0046cfce0?)
      k8s.io/client-go/tools/cache/shared_informer.go:526
    > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7()
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213
    > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 7659
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211

  goroutine 7733 [chan receive, 59 minutes]
    > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc00598ac18)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923
    google.golang.org/grpc.(*Server).Serve(0xc0057eca00, {0x5a3c4a0, 0xc00598ac18})
      google.golang.org/grpc@v1.72.1/server.go:890
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1()
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7682
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80

There were additional failures detected.
To view them in detail run ginkgo -vv ------------------------------ • [TIMEDOUT] [3290.147 seconds] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] runs a pod without a generated resource claim [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:832 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:29:24.286 I0922 00:29:24.286852 23 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:29:24.288 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:29:24.298 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:29:24.302 STEP: selecting nodes @ 09/22/25 00:29:24.308 I0922 00:29:24.352403 23 deploy.go:142] testing on nodes [latest-worker2] STEP: deploying driver dra-9212.k8s.io on nodes [latest-worker2] @ 09/22/25 00:29:24.353 I0922 00:29:24.358373 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:24.358537 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:29:24.415559 23 create.go:156] creating *v1.ReplicaSet: dra-9212/dra-test-driver I0922 00:29:25.537074 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:25.537169 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:26.441934 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:27.249816 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:27.249930 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:27.784835 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:29:30.757286 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:31.626458 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:31.626561 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:35.500057 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:41.406844 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:41.406941 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:42.727784 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:56.526170 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:56.526247 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:00.225809 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:35.530165 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:35.530275 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:39.172668 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:11.186688 23 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:11.186790 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:22.692083 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:42.901608 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:42.901701 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:55.115869 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:32:30.215362 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:37.040730 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:37.040832 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:03.730972 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:25.932172 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:25.932255 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:39.446274 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:21.709006 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:21.709121 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:22.905391 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:34:56.053774 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:20.743504 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:20.743604 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:31.032773 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:52.042943 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:52.043050 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:22.111359 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:22.816818 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:22.816926 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:37:12.318519 23 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:12.318619 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:20.126239 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:47.363325 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:47.363423 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:04.584746 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:22.386077 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:22.386198 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:48.977465 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:54.037484 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:54.037607 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:29.244825 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:40.404641 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:40.404742 23 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:05.891072 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:26.471561 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:26.471673 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:45.524575 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:41:22.290218 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:24.769700 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:24.769800 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:42:01.990530 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:01.990631 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:09.556293 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:42:43.605019 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:43.843240 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the 
server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:43.843319 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:43:17.343257 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:17.343355 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:41.015728 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:10.520234 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:10.520335 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:14.846106 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:48.418149 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:48.418286 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:14.095170 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:37.614860 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:37.614966 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:06.302367 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested 
resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:33.048718 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:33.048885 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:51.469407 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:32.802819 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:32.802927 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:36.592723 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:11.176632 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:11.176716 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:30.022026 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:47.485583 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:47.485697 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:49:21.867350 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:21.867450 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not 
find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:49:29.908258 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:50:00.310204 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:03.517804 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:03.517910 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:41.282629 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:52.566799 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:52.566909 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:37.943504 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:46.576179 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:46.576279 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:11.165441 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:44.292864 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get 
resourceclaims.resource.k8s.io)" E0922 00:52:44.293327 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:45.220042 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:33.531663 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:33.531761 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:33.922300 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:04.542798 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:04.542901 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:23.220986 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:54:55.011088 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:59.717593 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:59.717712 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:55:36.985581 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:36.985708 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get 
resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:52.441125 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:30.254834 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:30.254947 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:47.423944 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:00.472899 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:00.473009 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:21.676086 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:36.499148 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:36.499246 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:59.253434 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:18.265671 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:18.265758 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:32.118691 23 reflector.go:205] 
"Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:59:06.233422 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:07.919940 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:07.920042 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:37.758829 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:54.979962 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:54.980045 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:30.925644 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:53.824261 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:53.824364 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:12.437995 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:01:50.485041 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:01:52.080638 23 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:52.080721 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:02:40.081766 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:40.388159 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:40.388249 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:32.586450 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:40.346430 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:40.346539 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:13.302375 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:26.422211 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:26.422321 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:10.081097 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:25.617653 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:25.617750 23 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:44.447198 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:58.308631 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:58.308767 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:14.642260 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:36.071483 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:36.071649 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:00.147356 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:28.818882 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:28.818987 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:48.725959 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:07.965345 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:07.965445 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:28.580584 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:09:02.022101 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:04.133308 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:04.133408 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:09:41.789597 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:41.789697 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:50.470242 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:10:20.996822 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:31.560393 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:31.560492 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:11:04.548822 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:04.548932 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:16.148509 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:11:40.353164 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:40.353262 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:12.689305 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:28.807224 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:28.807324 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:59.661432 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:26.100427 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:26.100506 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:47.069474 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:04.459120 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:04.459199 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:31.426067 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:53.099258 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:53.099365 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:20.784213 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:51.980888 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:51.980998 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:13.638046 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:27.263353 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:27.263441 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:11.589359 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:14.842670 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:14.842769 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:47.581619 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:18:12.398818 23 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:12.398918 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:27.407522 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:03.929276 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:03.929374 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:22.187781 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:51.274540 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:51.274635 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:56.152311 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:22.715050 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:22.715156 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:37.269965 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:16.699970 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:16.700066 23 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:27.801435 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:54.542707 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:54.542815 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:21.507018 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:52.055086 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:52.055458 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:05.342301 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:40.698069 23 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:40.698170 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:24:03.456968 23 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.917 I0922 01:24:12.953969 23 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc002254960>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: 
resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.954 STEP: Waiting for ResourceSlices of driver dra-9212.k8s.io to be removed... @ 09/22/25 01:24:12.954 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.96 I0922 01:24:12.961874 23 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:13.007 STEP: Collecting events from namespace "dra-9212". @ 09/22/25 01:24:13.007 STEP: Found 6 events. @ 09/22/25 01:24:13.01 I0922 01:24:13.010746 23 dump.go:53] At 2025-09-22 00:29:24 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-stcxz I0922 01:24:13.010775 23 dump.go:53] At 2025-09-22 00:29:24 +0000 UTC - event for dra-test-driver-stcxz: {default-scheduler } Scheduled: Successfully assigned dra-9212/dra-test-driver-stcxz to latest-worker2 I0922 01:24:13.010799 23 dump.go:53] At 2025-09-22 00:29:24 +0000 UTC - event for dra-test-driver-stcxz: {kubelet latest-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:13.010819 23 dump.go:53] At 2025-09-22 00:29:24 +0000 UTC - event for dra-test-driver-stcxz: {kubelet latest-worker2} Created: Created container: pause I0922 01:24:13.010835 23 dump.go:53] At 2025-09-22 00:29:25 +0000 UTC - event for dra-test-driver-stcxz: {kubelet latest-worker2} Started: Started container pause I0922 01:24:13.010854 23 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-stcxz: {kubelet latest-worker2} Killing: Stopping container pause I0922 01:24:13.014478 23 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:13.014593 23 resource.go:158] dra-test-driver-stcxz latest-worker2 Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:29:26 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:29:24 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:29:26 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:29:26 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:29:24 +0000 UTC }] I0922 01:24:13.014619 23 resource.go:161] I0922 01:24:13.035767 23 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:13.039856 23 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm 
Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 
registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.039898 23 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:13.042753 23 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:13.060953 23 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061017 23 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:13.061082 23 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061102 23 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:13.061119 23 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061137 23 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:13.061157 23 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061174 23 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.061193 23 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061209 23 dump.go:134] 
Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.061231 23 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061253 23 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.061269 23 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061301 23 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:13.061321 23 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061337 23 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.061356 23 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061373 23 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.061388 23 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.061402 23 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.129692 23 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.129752 23 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.134982 23 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de 
docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 
registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 
docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.135058 23 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.138468 23 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.155018 23 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155050 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.155072 23 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155092 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.155160 23 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155186 23 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.155234 23 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155260 23 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.155291 23 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155306 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.155323 23 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155351 23 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.155370 23 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155384 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.155399 23 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155413 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.155429 23 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155443 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.155470 23 dump.go:128] 
dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.155484 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.803882 23 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.803910 23 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.808287 23 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d 
registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 
litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.808338 23 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.810701 23 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.819778 23 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.819838 23 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.819854 23 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.819864 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.819876 23 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.819886 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.819897 23 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.819923 23 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.819935 23 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.819944 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.819955 23 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.819966 23 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.819977 23 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.819987 23 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:14.427661 23 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-9212" for this suite. 
@ 09/22/25 01:24:14.428 << Timeline [TIMEDOUT] A suite timeout occurred In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.917 This is the Progress Report generated when the suite timeout occurred: [sig-node] [DRA] control plane [ConformanceCandidate] runs a pod without a generated resource claim (Spec Runtime: 54m48.631s) k8s.io/kubernetes/test/e2e/dra/dra.go:832 In [BeforeEach] (Node Runtime: 54m48.565s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 At [By Step] deploying driver dra-9212.k8s.io on nodes [latest-worker2] (Step Runtime: 54m48.565s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397 Spec Goroutine goroutine 7927 [select] k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc001a9dc80, {0x5a48ba8, 0xc001cbec80}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537 k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc0024a4450?}, {{0xc0006ba310, 0xf}, {0x5aa1d18, 0xc001e78700}, 0xc001004bc0, 0xc000a16dc8, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419 k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc0024a4450}, {{0xc0006ba310, 0xf}, {0x5aa1d18, 0xc001e78700}, 0xc001004bc0, 0xc000a16dc8, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179 k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc0018a5110, {0xc00244fec0?, 0x5a3c470?}, {0xc001da5c20?}) k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773 > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc00244fec0}, {0x51bc291, 0x4}, {0xc0006ba310, 0xf}, {0x5aa1d18, 0xc001e78700}, {0xc0020b32d0, 0xe}, ...) k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc000a1fad0, 0xc001315450, 0xc001728120) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc000a1fad0, 0xc001a162a0?, 0x0?) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323 > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x24cce27?, 0xc001ab19f0?}) github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7532 github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881 Goroutines of Interest goroutine 7970 [chan receive, 55 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc001319578) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc001ba0400, {0x5a3c4a0, 0xc001319578}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7927 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 goroutine 7955 [sync.Cond.Wait, 55 minutes] sync.runtime_notifyListWait(0xc00118d7e8, 0x0) runtime/sema.go:597 sync.(*Cond).Wait(0x51dbe59?) 
sync/cond.go:71 k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc00118d7c0, 0xc000cd1ad0) k8s.io/client-go/tools/cache/the_real_fifo.go:207 k8s.io/client-go/tools/cache.(*controller).processLoop(0xc00188a000, {0x5a48dd8, 0xc00209d0a0}) k8s.io/client-go/tools/cache/controller.go:211 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc00209d0a0?}, 0xc001b22ea0?) k8s.io/apimachinery/pkg/util/wait/backoff.go:255 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc00209d0a0}, 0xc000187db8, {0x5a06ae0, 0xc001b22ea0}, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:256 k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc00209d0a0}, 0xc000187db8, 0x3b9aca00, 0x0, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:223 k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...) k8s.io/apimachinery/pkg/util/wait/backoff.go:172 k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc00188a000, {0x5a48dd8, 0xc00209d0a0}) k8s.io/client-go/tools/cache/controller.go:183 k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc0006a29a0, {0x5a48dd8, 0xc00209d0a0}) k8s.io/client-go/tools/cache/shared_informer.go:587 k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc001fac500?, 0xc0004032c0?) k8s.io/client-go/tools/cache/shared_informer.go:526 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 7889 k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211 goroutine 7971 [chan receive, 55 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc0013195f0) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc001ba0600, {0x5a3c4a0, 0xc0013195f0}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7927 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 There were additional failures detected. 
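Both this stuck BeforeEach and the 404s earlier in the run come down to the same symptom: the API server does not serve the resource.k8s.io/v1 group version that the test driver and its informers request, so PublishResources blocks in initInformer and the reflectors retry forever. The node dump shows v1.33.1 control-plane images, and resource.k8s.io/v1 only appears in newer releases, so a version skew between the e2e binary and the cluster (or a missing runtime-config/feature gate on the server) is the most likely explanation. The following is a minimal sketch, not part of the suite, assuming a kubeconfig at ~/.kube/config, that uses client-go's discovery client to show which resource.k8s.io versions the cluster actually serves:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path assumed; adjust if it lives elsewhere.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Print every version the server advertises for the resource.k8s.io group.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "resource.k8s.io" {
			for _, v := range g.Versions {
				fmt.Println(v.GroupVersion)
			}
		}
	}
	// Direct probe for the version the DRA test driver needs.
	if _, err := dc.ServerResourcesForGroupVersion("resource.k8s.io/v1"); err != nil {
		fmt.Printf("resource.k8s.io/v1 not served: %v\n", err)
	}
}

If only the beta versions (or nothing) come back, the practical fix is to run an e2e binary that matches the cluster's version or to enable the v1 API on the server; the driver cannot publish ResourceSlices without it.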
To view them in detail run ginkgo -vv ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [TIMEDOUT] [3338.355 seconds] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] retries pod scheduling after creating device class [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:780 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:28:36.092 I0922 00:28:36.092988 25 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:28:36.094 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:28:36.104 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:28:36.108 STEP: selecting nodes @ 09/22/25 00:28:36.114 I0922 00:28:36.175959 25 deploy.go:142] testing on nodes [latest-worker] STEP: deploying driver dra-2483.k8s.io on nodes [latest-worker] @ 09/22/25 00:28:36.176 I0922 00:28:36.181642 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:36.181752 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:28:36.236873 25 create.go:156] creating *v1.ReplicaSet: dra-2483/dra-test-driver I0922 00:28:37.264268 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:37.264366 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:38.266337 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:28:39.471161 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:40.409209 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:40.409287 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:42.597616 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:45.710525 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:45.710628 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:46.292719 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:54.752869 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:54.752972 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:58.492233 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:10.014108 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:10.014210 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:20.268018 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:59.445703 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:59.445803 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:05.691735 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:40.268686 25 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:40.268822 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:44.990685 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:31:28.620982 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:39.550719 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:39.550816 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:07.760125 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:36.162567 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:36.162664 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:47.524557 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:21.779721 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:21.779844 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:23.694090 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:52.013772 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:52.013875 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:57.204090 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:48.589406 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:48.589501 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:50.819379 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:23.120895 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:23.120996 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:47.160388 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:06.634804 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:06.634912 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:27.736320 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:49.730941 25 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:49.731072 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:17.711733 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:37.894588 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:37.894685 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:48.420771 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:28.713945 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:28.714052 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:46.474998 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:18.066893 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:18.066983 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:19.774070 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:39:54.278110 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:08.849615 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:08.849716 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:38.731620 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:06.195442 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:06.195540 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:34.242221 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:36.825695 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:36.825801 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:20.946157 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:27.991000 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:27.991099 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:09.540468 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:10.983618 25 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:10.983741 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:54.790053 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:00.235188 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:00.235285 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:40.846926 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:59.098424 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:59.098525 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:18.665887 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:48.969602 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:48.969705 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:05.712062 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:28.547349 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:28.547460 25 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:38.904528 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:01.432222 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:01.432338 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:23.651915 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:53.639227 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:53.639331 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:54.714755 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:48:45.332597 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:52.967186 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:52.967282 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:49:24.066037 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:24.066140 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:49:35.448325 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:50:15.888000 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:17.921460 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:17.921820 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:50:59.399621 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:59.399773 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:05.556794 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:51:40.222952 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:51.827714 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:51.827855 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:29.330967 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:43.273202 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:43.273337 25 reflector.go:205] "Failed to watch" err="failed to 
list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:21.256761 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:30.824290 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:30.824388 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:17.505092 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:25.096706 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:25.096797 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:58.786989 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:21.131970 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:21.132069 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:33.867516 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:13.026610 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:13.026710 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:33.149814 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:57:04.632059 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:08.310146 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:08.310245 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:57:52.482799 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:52.482929 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:00.069880 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:32.957798 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:32.957890 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:56.852792 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:08.198278 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:08.198372 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:36.965869 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:39.247175 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:39.247256 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:00:14.112185 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:14.112284 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:29.875568 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:48.935122 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:48.935228 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:08.615874 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:01:20.293757 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:20.293883 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:02:03.634324 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:12.345073 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:12.345172 25 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:01.702630 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:03.001957 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:03.002051 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:32.965671 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:00.645753 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:00.645852 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:11.589362 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:45.953019 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:45.953220 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:51.341226 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:31.335551 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:31.335649 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" 
type="*v1.ResourceClaim" E0922 01:05:35.205443 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:06:22.037071 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:30.268799 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:30.268896 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:00.877475 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:28.364331 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:28.364427 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:40.922840 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:26.920620 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:26.920718 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:31.391910 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:09:05.261968 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:07.275019 25 
deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:07.275116 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:39.651739 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:56.488841 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:56.488938 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:10:12.170971 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:28.961714 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:28.961810 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:04.244352 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:11:12.977484 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:12.977584 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:42.721141 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:11:47.972159 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 
01:11:47.972255 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:14.345773 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:25.621746 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:25.621847 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:07.664639 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:22.351167 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:22.351268 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:14:03.632428 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:03.632524 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:04.437805 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:41.897827 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:41.897954 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:41.962842 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:20.020713 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:20.020809 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:32.017336 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:08.373365 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:08.373466 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:18.942047 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:48.154246 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:48.154361 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:57.175714 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:18.410134 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:18.410306 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:30.824178 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:54.999018 25 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:54.999110 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:09.516977 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:18:28.657544 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:28.657654 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:52.179367 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:23.228119 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:23.228215 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:42.883747 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:14.762412 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:14.762522 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:29.547282 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:04.007366 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:04.007488 25 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:21.682581 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:40.491741 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:40.491880 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:18.566796 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:39.645230 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:39.645539 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:13.646818 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:17.340390 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:17.340491 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:49.923487 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:53.056219 25 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:53.056325 25 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.766 I0922 01:24:12.811767 25 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc000d1b900>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.812 STEP: Waiting for ResourceSlices of driver dra-2483.k8s.io to be removed... @ 09/22/25 01:24:12.812 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.817 I0922 01:24:12.819393 25 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:12.905 STEP: Collecting events from namespace "dra-2483". @ 09/22/25 01:24:12.905 STEP: Found 6 events. @ 09/22/25 01:24:12.909 I0922 01:24:12.909641 25 dump.go:53] At 2025-09-22 00:28:36 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-9mlzl I0922 01:24:12.909670 25 dump.go:53] At 2025-09-22 00:28:36 +0000 UTC - event for dra-test-driver-9mlzl: {default-scheduler } Scheduled: Successfully assigned dra-2483/dra-test-driver-9mlzl to latest-worker I0922 01:24:12.909687 25 dump.go:53] At 2025-09-22 00:28:36 +0000 UTC - event for dra-test-driver-9mlzl: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:12.909704 25 dump.go:53] At 2025-09-22 00:28:36 +0000 UTC - event for dra-test-driver-9mlzl: {kubelet latest-worker} Created: Created container: pause I0922 01:24:12.909722 25 dump.go:53] At 2025-09-22 00:28:37 +0000 UTC - event for dra-test-driver-9mlzl: {kubelet latest-worker} Started: Started container pause I0922 01:24:12.909745 25 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-9mlzl: {kubelet latest-worker} Killing: Stopping container pause I0922 01:24:12.912868 25 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:12.912964 25 resource.go:158] dra-test-driver-9mlzl latest-worker Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:28:37 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:28:36 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:28:37 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:28:37 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:28:36 +0000 UTC }] I0922 01:24:12.912985 25 resource.go:161] I0922 01:24:13.005695 25 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:13.009317 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.009354 25 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:13.012480 25 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:13.034978 25 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035034 25 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 
01:24:13.035075 25 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035107 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.035149 25 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035183 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.035219 25 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035250 25 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.035279 25 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035312 25 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:13.035372 25 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035404 25 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:13.035438 25 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035470 25 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:13.035507 25 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035538 25 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.035567 25 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035603 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.035632 25 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035664 25 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.111821 25 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.111864 25 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.117323 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 
docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e 
registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.117388 25 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.120548 25 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.135604 25 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135652 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135676 25 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135694 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135714 25 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135731 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135750 25 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135766 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135786 25 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135819 25 dump.go:134] Container pause ready: true, restart count 0 I0922 
01:24:13.135840 25 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135856 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135875 25 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135901 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.135921 25 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135935 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.135953 25 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135967 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135986 25 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.136003 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.817882 25 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.817909 25 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.822112 25 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki 
BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf 
docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef 
docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.822147 25 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.824991 25 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.837295 25 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.837331 25 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.837354 25 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.837373 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.837392 25 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.837414 25 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.837430 25 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.837446 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.837465 25 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.837481 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.837500 25 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.837518 25 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.837534 25 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.837550 25 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:14.439713 25 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-2483" for this suite. 
@ 09/22/25 01:24:14.441 << Timeline [TIMEDOUT] A suite timeout occurred In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.766 This is the Progress Report generated when the suite timeout occurred: [sig-node] [DRA] control plane [ConformanceCandidate] retries pod scheduling after creating device class (Spec Runtime: 55m36.674s) k8s.io/kubernetes/test/e2e/dra/dra.go:780 In [BeforeEach] (Node Runtime: 55m36.59s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 At [By Step] deploying driver dra-2483.k8s.io on nodes [latest-worker] (Step Runtime: 55m36.59s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397 Spec Goroutine goroutine 8735 [select] k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc002f203c0, {0x5a48ba8, 0xc003cbe320}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537 k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc006a7b290?}, {{0xc004ece950, 0xf}, {0x5aa1d18, 0xc006bbc1c0}, 0xc002f693c0, 0xc00510a058, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419 k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc006a7b290}, {{0xc004ece950, 0xf}, {0x5aa1d18, 0xc006bbc1c0}, 0xc002f693c0, 0xc00510a058, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179 k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc006b9c0d0, {0xc006a7ad80?, 0x5a3c470?}, {0xc006bb6240?}) k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773 > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc006a7ad80}, {0x51bc291, 0x4}, {0xc004ece950, 0xf}, {0x5aa1d18, 0xc006bbc1c0}, {0xc004a7c240, 0xd}, ...) k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc005aa42c0, 0xc005a635e0, 0xc006b3d9b0) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc005aa42c0, 0x0?, 0x2235ee0?) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323 > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x5a062e0?, 0xc0030ec1e0?}) github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7443 github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881 Goroutines of Interest goroutine 8760 [sync.Cond.Wait, 57 minutes] sync.runtime_notifyListWait(0xc006bc39c8, 0x0) runtime/sema.go:597 sync.(*Cond).Wait(0x51dbe59?) sync/cond.go:71 k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc006bc39a0, 0xc002f5d380) k8s.io/client-go/tools/cache/the_real_fifo.go:207 k8s.io/client-go/tools/cache.(*controller).processLoop(0xc006bb40b0, {0x5a48dd8, 0xc006b88a80}) k8s.io/client-go/tools/cache/controller.go:211 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc006b88a80?}, 0xc006bc1ad0?) 
k8s.io/apimachinery/pkg/util/wait/backoff.go:255 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc006b88a80}, 0xc006405db8, {0x5a06ae0, 0xc006bc1ad0}, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:256 k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc006b88a80}, 0xc006405db8, 0x3b9aca00, 0x0, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:223 k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...) k8s.io/apimachinery/pkg/util/wait/backoff.go:172 k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc006bb40b0, {0x5a48dd8, 0xc006b88a80}) k8s.io/client-go/tools/cache/controller.go:183 k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc0065500b0, {0x5a48dd8, 0xc006b88a80}) k8s.io/client-go/tools/cache/shared_informer.go:587 k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc003d59860?, 0x10000c006a47900?) k8s.io/client-go/tools/cache/shared_informer.go:526 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 8745 k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211 goroutine 7462 [chan receive, 55 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc0046f2948) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc005fc9200, {0x5a3c4a0, 0xc0046f2948}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 8735 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 goroutine 7461 [chan receive, 55 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc0046f28b8) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc005fc9000, {0x5a3c4a0, 0xc0046f28b8}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 8735 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 There were additional failures detected. 
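Every failure in the timeline above is a 404 NotFound for resourceclaims.resource.k8s.io and resourceslices.resource.k8s.io, and the spec goroutine is parked in resourceslice.(*Controller).initInformer waiting for an informer cache that can never sync; this is consistent with the cluster not serving the resource.k8s.io/v1 group/version that these DRA specs request (the node dump above lists kube-apiserver v1.33.1, which appears to predate the GA v1 API and would serve only the beta versions of the group). As a quick, hedged check run outside the e2e suite, a client-go discovery sketch along the following lines could confirm whether the group/version is served; the program, its file layout, and the kubeconfig path are illustrative assumptions, not part of the test code:

    // Diagnostic sketch (not part of the e2e code): check whether the cluster
    // serves resource.k8s.io/v1, the group/version these DRA specs require.
    // Assumes $HOME/.kube/config points at the cluster under test.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // A NotFound error here matches the 404s in the timeline: the
        // group/version is simply not served by this apiserver.
        rl, err := dc.ServerResourcesForGroupVersion("resource.k8s.io/v1")
        if err != nil {
            fmt.Println("resource.k8s.io/v1 is not served:", err)
            return
        }
        for _, r := range rl.APIResources {
            fmt.Println(r.Name) // expect deviceclasses, resourceclaims, resourceslices, ...
        }
    }

Equivalently, "kubectl api-resources --api-group=resource.k8s.io" against the same kubeconfig shows which version of the group, if any, the server exposes; a cluster able to run these specs would list deviceclasses, resourceclaims, resourceclaimtemplates and resourceslices under resource.k8s.io/v1.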
To view them in detail run ginkgo -vv ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ • [TIMEDOUT] [3601.255 seconds] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports external claim referenced by multiple containers of multiple pods [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:881 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:24:13.197 I0922 00:24:13.197588 18 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:24:13.198 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:24:13.206 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:24:13.209 STEP: selecting nodes @ 09/22/25 00:24:13.213 I0922 00:24:13.292593 18 deploy.go:142] testing on nodes [latest-worker] STEP: deploying driver dra-3384.k8s.io on nodes [latest-worker] @ 09/22/25 00:24:13.293 I0922 00:24:13.298423 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:13.298567 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:24:13.373525 18 create.go:156] creating *v1.ReplicaSet: dra-3384/dra-test-driver I0922 00:24:14.275529 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:14.275612 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:24:15.397388 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:24:16.292746 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:16.292817 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:24:16.707191 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:24:19.347950 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:24:20.485605 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:20.485696 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:24:23.925266 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:24:28.585150 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:28.585249 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:24:30.431971 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:24:46.311031 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:24:52.828334 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:52.828433 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:32.559066 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:42.315558 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:42.315656 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:26:27.568355 18 deploy.go:156] "Listing 
ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:27.568442 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:30.336305 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:27:00.775779 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:07.691294 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:07.691392 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:27:43.791865 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:48.220008 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:48.220092 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:33.252288 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:42.329684 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:42.329786 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:26.638545 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:30.553225 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:30.553355 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:30:24.430237 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:24.430350 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:26.384172 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:31:06.014451 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:13.206300 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:13.206401 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:59.805784 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:03.470218 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:03.470317 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:30.731330 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:51.801565 18 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:51.801639 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:12.642247 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:40.650860 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:40.650960 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:53.174695 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:11.259446 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:11.259544 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:48.570561 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:54.681133 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:54.681236 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:18.631901 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:29.340753 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:29.340846 18 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:04.114276 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:08.749334 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:08.749432 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:38.855510 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:07.784691 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:07.784806 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:28.589186 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:55.976974 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:55.977073 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:01.770167 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:54.206854 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:54.206957 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:59.257274 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:52.946089 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:52.946192 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:58.798666 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:40:38.321316 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:38.813252 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:38.813351 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:08.579093 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:32.283597 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:32.283677 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:02.762152 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:05.542328 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:05.542430 18 reflector.go:205] "Failed to watch" err="failed to 
list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:41.632468 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:43.445431 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:43.445512 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:16.241732 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:42.929884 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:42.929986 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:06.712078 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:17.962162 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:17.962274 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:58.403604 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:15.077556 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:15.077661 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:49.161108 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:58.361727 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:58.361811 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:21.500315 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:39.601698 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:39.601796 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:15.584992 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:17.944396 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:17.944484 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:48:10.768415 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:10.768524 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:11.067954 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:49:00.880216 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" 
resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:00.880332 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:49:07.752968 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:49:40.178126 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:40.178231 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:49:48.159598 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:37.057389 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:37.057655 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:43.512471 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:51:15.161060 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:29.541323 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:29.541440 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:50.167045 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:24.611678 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:24.611793 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:44.295944 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:03.729792 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:03.729873 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:28.850240 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:57.608618 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:57.608732 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:27.668299 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:36.475121 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:36.475210 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:07.679906 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:26.851792 18 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:26.851919 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:39.317421 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:12.873098 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:12.873176 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:13.249264 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:44.886064 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:44.886175 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:10.474840 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:21.112247 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:21.112347 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:07.287965 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:15.670177 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:15.670300 18 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:54.309014 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:57.257636 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:57.257715 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:43.055581 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:54.656204 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:54.656301 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:33.164050 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:35.697372 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:35.697474 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:01:13.407991 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:13.408137 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:15.221286 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:01:49.437875 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:10.133803 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:10.133909 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:02:34.642402 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:55.934616 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:55.934714 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:14.987356 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:52.831077 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:52.831175 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:03.330232 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:24.493696 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:24.493808 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:44.842873 18 reflector.go:205] "Failed to watch" err="failed to 
list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:16.845839 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:16.845920 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:32.316138 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:06:10.028952 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:12.458407 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:12.458546 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:06:52.469873 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:52.469999 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:01.418538 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:07:46.243161 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:48.558281 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:48.558377 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:20.303877 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:33.073466 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:33.073603 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:09.150056 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:11.355775 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:11.355907 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:59.586021 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:03.444769 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:03.444872 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:10:49.698728 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:49.698856 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:10:52.924241 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:11:21.115010 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" 
resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:21.115116 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:35.422263 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:07.714737 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:07.714883 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:20.996446 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:53.578016 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:53.578113 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:10.539031 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:29.635270 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:29.635376 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:14:02.901146 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:02.901247 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:08.544283 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server 
could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:14:54.420847 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:59.810990 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:59.811087 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:15:37.650033 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:37.650178 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:53.308287 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:20.089414 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:20.089612 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:42.841972 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:53.122728 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:53.122828 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:30.345911 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" 
type="*v1.ResourceSlice" I0922 01:17:31.344248 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:31.344355 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:18:28.474358 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:28.474480 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:30.068614 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:19:01.614441 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:17.084149 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:17.084255 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:19:57.530269 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:57.530372 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:01.535571 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:51.449156 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:51.449271 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" 
type="*v1.ResourceClaim" E0922 01:20:52.858766 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:24.570751 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:24.570852 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:42.212329 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:54.744201 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:54.744304 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:13.457425 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:47.778055 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:47.778162 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:56.109627 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:47.564988 18 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:47.565093 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:54.703299 18 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get 
resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.82 I0922 01:24:12.855199 18 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc0026b0640>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.855 STEP: Waiting for ResourceSlices of driver dra-3384.k8s.io to be removed... @ 09/22/25 01:24:12.855 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.86 I0922 01:24:12.862347 18 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:12.906 STEP: Collecting events from namespace "dra-3384". @ 09/22/25 01:24:12.906 STEP: Found 6 events. @ 09/22/25 01:24:12.91 I0922 01:24:12.910091 18 dump.go:53] At 2025-09-22 00:24:13 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-b2v6n I0922 01:24:12.910119 18 dump.go:53] At 2025-09-22 00:24:13 +0000 UTC - event for dra-test-driver-b2v6n: {default-scheduler } Scheduled: Successfully assigned dra-3384/dra-test-driver-b2v6n to latest-worker I0922 01:24:12.910146 18 dump.go:53] At 2025-09-22 00:24:13 +0000 UTC - event for dra-test-driver-b2v6n: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:12.910164 18 dump.go:53] At 2025-09-22 00:24:13 +0000 UTC - event for dra-test-driver-b2v6n: {kubelet latest-worker} Created: Created container: pause I0922 01:24:12.910183 18 dump.go:53] At 2025-09-22 00:24:14 +0000 UTC - event for dra-test-driver-b2v6n: {kubelet latest-worker} Started: Started container pause I0922 01:24:12.910199 18 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-b2v6n: {kubelet latest-worker} Killing: Stopping container pause I0922 01:24:12.912931 18 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:12.913068 18 resource.go:158] dra-test-driver-b2v6n latest-worker Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:14 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:13 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:14 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:14 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:13 +0000 UTC }] I0922 01:24:12.913089 18 resource.go:161] I0922 01:24:13.006431 18 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:13.010530 18 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.010572 18 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:13.013733 18 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:13.032852 18 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032891 18 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.032914 18 dump.go:128] 
kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032935 18 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.032961 18 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032979 18 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.032999 18 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033016 18 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:13.033035 18 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033053 18 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:13.033072 18 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033089 18 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:13.033126 18 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033143 18 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.033162 18 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033179 18 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.033222 18 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033240 18 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.033259 18 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033277 18 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:13.103289 18 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.103333 18 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.108541 18 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 
docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e 
registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.108597 18 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.112190 18 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.126504 18 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126573 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.126600 18 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126619 18 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.126645 18 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126676 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.126707 18 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126727 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.126746 18 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126763 18 dump.go:134] Container pause ready: true, restart count 0 I0922 
01:24:13.126794 18 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126826 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.126847 18 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126888 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.126935 18 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126963 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.126985 18 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.126999 18 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.127017 18 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.127035 18 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.802726 18 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.802763 18 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.807272 18 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki 
BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf 
docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef 
docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.807306 18 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.810171 18 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.821151 18 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.821190 18 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.821214 18 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.821233 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.821253 18 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.821270 18 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.821290 18 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.821307 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.821323 18 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.821340 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.821358 18 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.821375 18 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.821396 18 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.821413 18 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:14.446425 18 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-3384" for this suite. 
@ 09/22/25 01:24:14.447
<< Timeline

[TIMEDOUT] A suite timeout occurred
In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.82

This is the Progress Report generated when the suite timeout occurred:
[sig-node] [DRA] control plane [ConformanceCandidate] supports external claim referenced by multiple containers of multiple pods (Spec Runtime: 59m59.624s)
  k8s.io/kubernetes/test/e2e/dra/dra.go:881
  In [BeforeEach] (Node Runtime: 59m59.528s)
    k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
    At [By Step] deploying driver dra-3384.k8s.io on nodes [latest-worker] (Step Runtime: 59m59.528s)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397

Spec Goroutine
goroutine 7616 [select]
  k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc00087d680, {0x5a48ba8, 0xc00074d950})
    k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537
  k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc004c53c20?}, {{0xc001d00c70, 0xf}, {0x5aa1d18, 0xc000d81340}, 0xc00130a300, 0xc000675bb8, {0x0, 0x0}, ...})
    k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419
  k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc004c53c20}, {{0xc001d00c70, 0xf}, {0x5aa1d18, 0xc000d81340}, 0xc00130a300, 0xc000675bb8, {0x0, 0x0}, ...})
    k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179
  k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc0026a3930, {0xc004c52990?, 0x5a3c470?}, {0xc004e6e5a0?})
    k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773
  > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc004c52990}, {0x51bc291, 0x4}, {0xc001d00c70, 0xf}, {0x5aa1d18, 0xc000d81340}, {0xc002860090, 0xd}, ...)
    k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223
  > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc005912840, 0xc003b91bd0, 0xc00093f260)
    k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615
  > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc005912840, 0xc002fc3768?, 0x46caa74?)
    k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323
  > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1()
    k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292
  github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00469b580?, 0x672c262?})
    github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472
  github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
    github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894
  github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 129
    github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881

Goroutines of Interest
goroutine 7579 [sync.Cond.Wait, 59 minutes]
  sync.runtime_notifyListWait(0xc00075e348, 0x0)
    runtime/sema.go:597
  sync.(*Cond).Wait(0x51dbe59?)
    sync/cond.go:71
  k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc00075e320, 0xc001074dc0)
    k8s.io/client-go/tools/cache/the_real_fifo.go:207
  k8s.io/client-go/tools/cache.(*controller).processLoop(0xc001e50160, {0x5a48dd8, 0xc0020bab60})
    k8s.io/client-go/tools/cache/controller.go:211
  k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc0020bab60?}, 0xc000c0b080?)
    k8s.io/apimachinery/pkg/util/wait/backoff.go:255
  k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc0020bab60}, 0xc000181db8, {0x5a06ae0, 0xc000c0b080}, 0x1)
    k8s.io/apimachinery/pkg/util/wait/backoff.go:256
  k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc0020bab60}, 0xc000181db8, 0x3b9aca00, 0x0, 0x1)
    k8s.io/apimachinery/pkg/util/wait/backoff.go:223
  k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...)
    k8s.io/apimachinery/pkg/util/wait/backoff.go:172
  k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc001e50160, {0x5a48dd8, 0xc0020bab60})
    k8s.io/client-go/tools/cache/controller.go:183
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc001c4a000, {0x5a48dd8, 0xc0020bab60})
    k8s.io/client-go/tools/cache/shared_informer.go:587
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0005c8a50?, 0xc0056b4a10?)
    k8s.io/client-go/tools/cache/shared_informer.go:526
  > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7()
    k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213
  > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 799
    k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211

goroutine 7627 [chan receive, 59 minutes]
  > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc0059964e0)
    k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923
  google.golang.org/grpc.(*Server).Serve(0xc004fab200, {0x5a3c4a0, 0xc0059964e0})
    google.golang.org/grpc@v1.72.1/server.go:890
  k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1()
    k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82
  k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7616
    k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80

goroutine 7626 [chan receive, 59 minutes]
  > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc005996468)
    k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923
  google.golang.org/grpc.(*Server).Serve(0xc000269c00, {0x5a3c4a0, 0xc005996468})
    google.golang.org/grpc@v1.72.1/server.go:890
  k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1()
    k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82
  k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7616
    k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80

There were additional failures detected.
To view them in detail run ginkgo -vv
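Editor's note, not part of the captured suite output: the failures and the 59-minute hang above all trace back to the API server answering 404 ("the server could not find the requested resource") for the resource.k8s.io v1 endpoints (resourceclaims, resourceslices) that the DRA test driver's informers and ResourceSlice controller rely on. As a minimal sketch of how one might confirm this before re-running the suite, the short client-go program below asks the discovery API which resource.k8s.io group-versions the cluster actually serves; the kubeconfig path and the probed group-versions are assumptions chosen for illustration, not something the suite itself runs.

// apicheck.go: discovery sketch, assuming client-go is available in the module.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the run above used /home/xtesting/.kube/config.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A NotFound error here mirrors the 404s in the log: the group-version is not
	// served at all (for example, the resource.k8s.io API or the corresponding
	// feature gate is disabled on this cluster), as opposed to an RBAC problem.
	for _, gv := range []string{"resource.k8s.io/v1", "resource.k8s.io/v1beta1"} {
		if _, err := dc.ServerResourcesForGroupVersion(gv); err != nil {
			fmt.Printf("%s: not served (%v)\n", gv, err)
		} else {
			fmt.Printf("%s: served\n", gv)
		}
	}
}

If the probed versions are not served, the [DRA] specs cannot deploy their test driver, which is consistent with the BeforeEach failures and timeouts at deploy.go:287 seen throughout this run.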
------------------------------ S ------------------------------ • [TIMEDOUT] [3517.772 seconds] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports reusing resources [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:725 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:25:36.681 I0922 00:25:36.681323 33 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:25:36.682 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:25:36.69 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:25:36.694 STEP: selecting nodes @ 09/22/25 00:25:36.699 I0922 00:25:36.757727 33 deploy.go:142] testing on nodes [latest-worker2] STEP: deploying driver dra-9088.k8s.io on nodes [latest-worker2] @ 09/22/25 00:25:36.758 I0922 00:25:36.762367 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:36.762438 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:25:36.816216 33 create.go:156] creating *v1.ReplicaSet: dra-9088/dra-test-driver I0922 00:25:37.754002 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:37.754069 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:38.839245 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:25:40.193189 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:40.710954 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:40.711045 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:41.874628 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError"
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:44.205064 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:44.205158 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:45.249957 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:52.693038 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:52.693133 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:53.494595 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:26:10.248139 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:15.088740 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:15.088837 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:37.392918 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:43.795488 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:43.795594 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:27:16.002569 33 reflector.go:205] "Failed to watch" err="failed to 
list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:22.452588 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:22.452700 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:02.277809 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:20.390553 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:20.390651 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:57.052976 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:02.481097 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:02.481195 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:30.955968 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:00.499312 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:00.499435 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:17.145162 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:57.857123 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:57.857227 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:11.583184 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:41.550607 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:41.550724 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:09.081842 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:26.933194 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:26.933316 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:53.362128 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:25.289728 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:25.289815 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:34.657263 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:03.752830 33 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:03.752949 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:05.951880 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:40.138922 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:40.139022 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:45.983442 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:35:16.273551 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:28.330269 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:28.330377 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:00.608694 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:06.248787 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:06.248889 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:57.522317 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:02.177094 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:02.177226 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:30.271201 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:46.116605 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:46.116717 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:08.335906 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:23.894758 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:23.894866 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:46.471705 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:19.355585 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:19.355683 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:27.874868 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:58.001932 33 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:58.002084 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:07.998618 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:31.003472 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:31.003576 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:40.505430 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:08.728864 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:08.728939 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:28.170143 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:03.808280 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:03.808402 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:26.925589 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:34.407263 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:34.407363 33 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:13.287669 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:19.952162 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:19.952272 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:47.318101 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:15.456302 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:15.456432 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:39.846707 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:48.603621 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:48.603703 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:31.627414 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:44.025307 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:44.025427 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:08.739281 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:39.704603 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:39.704708 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:42.845802 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:47:17.619474 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:19.709183 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:19.709286 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:53.302163 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:59.089622 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:59.089721 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:48:38.770254 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:38.770351 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:52.852762 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:49:27.321599 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:27.321699 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:49:52.096736 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:21.561629 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:21.561730 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:40.972832 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:56.791051 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:56.791151 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:28.318008 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:30.823134 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:30.823248 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:04.120731 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:06.171380 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:06.171459 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:43.403365 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:56.579038 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:56.579171 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:27.623895 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:32.899689 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:32.899789 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:04.560332 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:05.917417 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:05.917517 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:40.012040 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:40.763180 33 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:40.763281 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:13.789843 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:34.197750 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:34.197842 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:12.763553 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:18.896500 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:18.896598 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:49.196374 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:53.805144 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:53.805239 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:25.960803 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:51.658402 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:51.658491 33 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:07.977992 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:38.238355 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:38.238453 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:54.653375 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:23.919844 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:23.919943 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:29.353337 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:09.194458 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:09.194569 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:27.340269 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:58.331894 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:58.332012 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:08.703122 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:01:44.469899 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:44.470012 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:51.668461 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:02:32.241966 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:39.155627 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:39.155726 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:06.911160 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:10.458460 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:10.458563 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:48.399215 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:53.223974 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:53.224083 33 reflector.go:205] "Failed to watch" err="failed to 
list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:04:31.776845 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:31.776949 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:43.425959 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:07.711729 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:07.711879 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:30.157872 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:59.442713 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:59.442821 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:05.516690 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:50.903745 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:50.903865 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:01.601317 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:33.776177 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:33.776290 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:41.210477 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:13.500471 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:13.500574 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:15.528878 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:08:53.050206 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:06.329825 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:06.329931 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:09:38.441115 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:38.441217 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:41.237596 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:36.728571 33 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:36.728667 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:10:39.625964 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:11:28.875689 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:11:36.660299 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:36.660391 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:08.698246 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:28.451353 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:28.451450 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:52.253408 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:59.833015 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:59.833088 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:26.087868 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:30.416741 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:30.416836 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:14:12.108349 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:12.108478 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:13.955437 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:14:55.064606 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:04.246129 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:04.246230 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:39.040808 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:45.071398 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:45.071505 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:25.686660 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:44.818410 33 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:44.818510 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:57.809729 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:23.975243 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:23.975343 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:49.701462 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:18:23.003506 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:23.003724 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:45.765882 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:11.808909 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:11.809003 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:35.079597 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:57.010011 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:57.010126 33 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:33.515885 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:48.079500 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:48.079605 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:17.199546 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:23.392252 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:23.392351 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:22:01.478405 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:01.478536 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:10.150316 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:41.352423 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:41.352658 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:09.213534 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:37.562288 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:37.562405 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:51.450626 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:24:10.357733 33 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:24:10.357836 33 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.746 I0922 01:24:12.790976 33 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc004427ea0>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.791 STEP: Waiting for ResourceSlices of driver dra-9088.k8s.io to be removed... @ 09/22/25 01:24:12.791 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.797 I0922 01:24:12.799413 33 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:12.805 STEP: Collecting events from namespace "dra-9088". @ 09/22/25 01:24:12.805 STEP: Found 6 events. 
@ 09/22/25 01:24:12.808 I0922 01:24:12.808449 33 dump.go:53] At 2025-09-22 00:25:36 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-lpp9v I0922 01:24:12.808465 33 dump.go:53] At 2025-09-22 00:25:36 +0000 UTC - event for dra-test-driver-lpp9v: {default-scheduler } Scheduled: Successfully assigned dra-9088/dra-test-driver-lpp9v to latest-worker2 I0922 01:24:12.808481 33 dump.go:53] At 2025-09-22 00:25:37 +0000 UTC - event for dra-test-driver-lpp9v: {kubelet latest-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:12.808495 33 dump.go:53] At 2025-09-22 00:25:37 +0000 UTC - event for dra-test-driver-lpp9v: {kubelet latest-worker2} Created: Created container: pause I0922 01:24:12.808515 33 dump.go:53] At 2025-09-22 00:25:37 +0000 UTC - event for dra-test-driver-lpp9v: {kubelet latest-worker2} Started: Started container pause I0922 01:24:12.808544 33 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-lpp9v: {kubelet latest-worker2} Killing: Stopping container pause I0922 01:24:12.812858 33 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:12.812945 33 resource.go:158] dra-test-driver-lpp9v latest-worker2 Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:38 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:36 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:38 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:38 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:25:36 +0000 UTC }] I0922 01:24:12.812961 33 resource.go:161] I0922 01:24:12.905311 33 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:12.909621 33 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 
registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:12.909669 33 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:12.912285 33 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:12.941489 33 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941522 33 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:12.941546 33 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941565 33 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:12.941585 33 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941600 33 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:12.941612 33 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941622 33 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:12.941652 33 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941668 33 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:12.941679 33 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941689 33 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:12.941703 33 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 
UTC (0+1 container statuses recorded) I0922 01:24:12.941713 33 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:12.941724 33 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941734 33 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:12.941745 33 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941755 33 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:12.941765 33 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.941773 33 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.076146 33 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.076186 33 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.081279 33 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f 
registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.081382 33 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.084618 33 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.101932 33 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.101973 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102019 33 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102038 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102059 33 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102076 33 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.102095 33 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102112 33 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.102131 33 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102145 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102164 33 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102180 33 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.102199 33 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102215 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102242 33 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102272 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102291 33 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102304 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102323 33 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102340 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.825511 33 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.825536 33 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.829498 33 
dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 
registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 
litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 
01:24:13.829550 33 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.832966 33 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.845305 33 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.845332 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.845345 33 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.845355 33 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.845366 33 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.845376 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.845387 33 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.845396 33 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.845407 33 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.845416 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.845427 33 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.845437 33 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.845448 33 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.845457 33 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:14.447233 33 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-9088" for this suite. 
@ 09/22/25 01:24:14.447 << Timeline [TIMEDOUT] A suite timeout occurred In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.746 This is the Progress Report generated when the suite timeout occurred: [sig-node] [DRA] control plane [ConformanceCandidate] supports reusing resources (Spec Runtime: 58m36.066s) k8s.io/kubernetes/test/e2e/dra/dra.go:725 In [BeforeEach] (Node Runtime: 58m35.988s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 At [By Step] deploying driver dra-9088.k8s.io on nodes [latest-worker2] (Step Runtime: 58m35.988s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397 Spec Goroutine goroutine 7905 [select] k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc000c1a0c0, {0x5a48ba8, 0xc0045ec320}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537 k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc001aceea0?}, {{0xc00341de50, 0xf}, {0x5aa1d18, 0xc001912a80}, 0xc0013cdd80, 0xc001b06098, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419 k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc001aceea0}, {{0xc00341de50, 0xf}, {0x5aa1d18, 0xc001912a80}, 0xc0013cdd80, 0xc001b06098, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179 k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc004eb8270, {0xc001ace4e0?, 0x5a3c470?}, {0xc004ea8480?}) k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773 > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc001ace4e0}, {0x51bc291, 0x4}, {0xc00341de50, 0xf}, {0x5aa1d18, 0xc001912a80}, {0xc004d829b0, 0xe}, ...) k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc00567de40, 0xc000750050, 0xc004150ab0) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc00567de40, 0x33839a8?, 0x2235ee0?) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323 > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x1a88432?, 0x337ae00?}) github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7436 github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881 Goroutines of Interest goroutine 7855 [chan receive, 58 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc004f04be8) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc001b8aa00, {0x5a3c4a0, 0xc004f04be8}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7905 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 goroutine 7958 [sync.Cond.Wait, 59 minutes] sync.runtime_notifyListWait(0xc0044267a8, 0x0) runtime/sema.go:597 sync.(*Cond).Wait(0x51dbe59?) 
sync/cond.go:71 k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc004426780, 0xc001134a30) k8s.io/client-go/tools/cache/the_real_fifo.go:207 k8s.io/client-go/tools/cache.(*controller).processLoop(0xc00447f290, {0x5a48dd8, 0xc0045c88c0}) k8s.io/client-go/tools/cache/controller.go:211 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc0045c88c0?}, 0xc0045bf200?) k8s.io/apimachinery/pkg/util/wait/backoff.go:255 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc0045c88c0}, 0xc000182db8, {0x5a06ae0, 0xc0045bf200}, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:256 k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc0045c88c0}, 0xc000182db8, 0x3b9aca00, 0x0, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:223 k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...) k8s.io/apimachinery/pkg/util/wait/backoff.go:172 k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc00447f290, {0x5a48dd8, 0xc0045c88c0}) k8s.io/client-go/tools/cache/controller.go:183 k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc0041a0dc0, {0x5a48dd8, 0xc0045c88c0}) k8s.io/client-go/tools/cache/shared_informer.go:587 k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0041d7e00?, 0xc0045c8070?) k8s.io/client-go/tools/cache/shared_informer.go:526 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 7851 k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211 goroutine 7856 [chan receive, 58 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc004f04c60) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc001b8ac00, {0x5a3c4a0, 0xc004f04c60}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7905 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 There were additional failures detected. 
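[Editor's note] The failure pattern in this report, the repeated "the server could not find the requested resource (get resourceclaims.resource.k8s.io / resourceslices.resource.k8s.io)" errors together with the Spec Goroutine parked in resourceslice.(*Controller).initInformer, indicates that the resource.k8s.io/v1 API group is not served by this v1.33.1 cluster, so the test driver's ResourceSlice informer can never sync and the BeforeEach hangs until the suite timeout. Below is a minimal sketch, not part of the e2e suite, assuming only a standard client-go discovery client and the same kubeconfig the suite uses, of how one might confirm up front whether resource.k8s.io/v1 is served before attempting to deploy a DRA driver:

```go
// Illustrative sketch only: checks whether resource.k8s.io/v1 is served.
// Uses standard client-go APIs; the program name and flow are hypothetical,
// not taken from the e2e suite.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the suite points at (e.g. $KUBECONFIG).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerResourcesForGroupVersion returns an error when the group/version
	// is not served, mirroring the NotFound errors seen in this log.
	if _, err := dc.ServerResourcesForGroupVersion("resource.k8s.io/v1"); err != nil {
		fmt.Println("resource.k8s.io/v1 not served:", err)
		return
	}
	fmt.Println("resource.k8s.io/v1 is served; ResourceClaims/ResourceSlices should be usable")
}
```

If this check fails with a NotFound-style error like the ones throughout this log, the DRA ConformanceCandidate specs cannot run against this cluster and would be better skipped than left to time out after ~58 minutes of informer retries.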
To view them in detail run ginkgo -vv ------------------------------ • [TIMEDOUT] [3491.844 seconds] [sig-node] [DRA] control plane [ConformanceCandidate] with node-local resources [BeforeEach] uses all resources [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:1027 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:26:02.609 I0922 00:26:02.609506 21 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:26:02.611 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:26:02.62 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:26:02.625 STEP: selecting nodes @ 09/22/25 00:26:02.63 I0922 00:26:02.669254 21 deploy.go:142] testing on nodes [latest-worker latest-worker2] STEP: deploying driver dra-6036.k8s.io on nodes [latest-worker latest-worker2] @ 09/22/25 00:26:02.669 I0922 00:26:02.674933 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:02.675040 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:26:02.729658 21 create.go:156] creating *v1.ReplicaSet: dra-6036/dra-test-driver I0922 00:26:03.577219 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:03.577281 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:04.752771 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:05.865295 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:05.865385 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:06.127094 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:26:09.271467 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:11.392214 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:11.392287 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:14.222904 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:18.401690 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:18.401791 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:26.384066 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:33.042686 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:33.042783 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:51.420790 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:20.191862 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:20.191962 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:27:36.750927 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:08.602533 21 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:08.602618 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:18.951667 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:02.559899 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:02.560018 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:11.072124 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:56.777570 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:56.777672 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:02.001471 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:30:45.214900 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:48.919606 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:48.919706 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:31:19.023977 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:19.024100 21 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:23.351335 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:53.971974 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:53.972078 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:55.986996 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:48.137307 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:48.137410 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:51.601079 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:25.625896 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:25.626004 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:42.091957 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:57.967599 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:57.967698 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:20.148974 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:31.431621 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:31.431717 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:12.346178 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:21.896817 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:21.896920 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:51.060700 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:19.824761 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:19.824867 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:41.056451 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:17.235309 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:17.235424 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:21.810077 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:54.716532 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:54.716634 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:00.828347 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:38:52.349778 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:54.190985 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:54.191076 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:23.948300 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:37.552512 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:37.552608 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:20.280192 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:24.895523 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:24.895609 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:41:03.393414 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:03.393525 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:18.049897 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:43.283794 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:43.283936 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:56.891278 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:23.855839 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:23.855943 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:28.398996 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:04.276867 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:04.277016 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:04.411093 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:43:56.170433 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:03.530262 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:03.530364 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:44:51.152323 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:51.152441 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:53.752186 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:45:30.191735 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:34.020670 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:34.020769 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:46:15.106817 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:15.106920 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:23.063483 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:52.198674 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:52.198788 21 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:54.680300 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:47:33.406967 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:47.475283 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:47.475424 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:10.515221 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:18.527300 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:18.527408 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:48:51.287650 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:51.287755 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:53.109203 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:49:35.911922 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:49:38.193014 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the 
requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:38.193111 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:12.789237 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:15.413518 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:15.413620 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:54.389295 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:59.640057 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:59.640144 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:26.774213 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:56.095640 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:56.095975 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:12.256369 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:53.815466 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:53.815623 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the 
requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:03.612488 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:49.403788 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:49.403932 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:56.477046 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:26.548165 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:26.548280 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:41.341570 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:07.832445 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:07.832532 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:13.014640 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:47.079885 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:47.079970 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:49.078270 21 
reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:56:19.682121 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:30.897792 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:30.897892 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:05.484586 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:05.562953 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:05.563023 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:35.536972 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:44.357062 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:44.357162 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:28.173428 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:39.461173 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:39.461268 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:23.292890 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:38.184263 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:38.184356 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:03.867313 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:21.366377 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:21.366482 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:55.460831 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:01:06.711599 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:06.711697 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:31.407430 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:00.459136 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:00.459214 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:02:03.270815 21 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:02:48.444234 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:00.052123 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:00.052234 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:33.246147 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:34.102875 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:34.102970 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:07.079192 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:31.332850 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:31.332953 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:39.040789 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:05:20.586177 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:24.104295 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the 
server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:24.104396 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:05:56.102583 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:56.102678 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:01.546419 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:52.519596 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:52.519707 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:57.187016 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:28.527027 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:28.527132 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:50.479790 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:02.238332 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:02.238442 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:23.285761 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested 
resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:36.532831 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:36.532933 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:56.407100 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:15.565123 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:15.565227 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:51.188817 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:58.950328 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:58.950437 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:10:46.156298 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:57.007590 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:57.007717 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:40.984695 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 
01:11:48.192896 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:48.192981 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:32.816272 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:40.484654 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:40.484771 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:13:18.584684 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:18.584799 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:24.734220 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:54.493284 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:54.493365 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:20.910143 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:45.561493 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:45.561600 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 
01:15:14.405698 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:39.287716 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:39.287833 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:56.466032 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:29.609107 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:29.609210 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:50.688578 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:06.161246 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:06.161357 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:33.266403 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:58.956896 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:58.956995 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:07.112874 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:18:42.604049 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:42.604186 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:43.843581 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:19:35.297495 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:35.401678 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:35.401953 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:16.648558 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:31.396695 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:31.396800 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:06.636870 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:20.404310 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:20.404393 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:45.678095 21 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:07.317107 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:07.317221 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:38.317527 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:03.566396 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:03.566529 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:16.597958 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:38.775286 21 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:38.775398 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:24:03.301058 21 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.862 I0922 01:24:12.896862 21 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc00325ec80>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.897 STEP: Waiting for 
ResourceSlices of driver dra-6036.k8s.io to be removed... @ 09/22/25 01:24:12.897 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.901 I0922 01:24:12.903116 21 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:12.907 STEP: Collecting events from namespace "dra-6036". @ 09/22/25 01:24:12.907 STEP: Found 12 events. @ 09/22/25 01:24:12.91 I0922 01:24:12.910963 21 dump.go:53] At 2025-09-22 00:26:02 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-hj5qv I0922 01:24:12.910980 21 dump.go:53] At 2025-09-22 00:26:02 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-khrc2 I0922 01:24:12.910993 21 dump.go:53] At 2025-09-22 00:26:02 +0000 UTC - event for dra-test-driver-hj5qv: {default-scheduler } Scheduled: Successfully assigned dra-6036/dra-test-driver-hj5qv to latest-worker2 I0922 01:24:12.911005 21 dump.go:53] At 2025-09-22 00:26:02 +0000 UTC - event for dra-test-driver-khrc2: {default-scheduler } Scheduled: Successfully assigned dra-6036/dra-test-driver-khrc2 to latest-worker I0922 01:24:12.911034 21 dump.go:53] At 2025-09-22 00:26:03 +0000 UTC - event for dra-test-driver-hj5qv: {kubelet latest-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:12.911056 21 dump.go:53] At 2025-09-22 00:26:03 +0000 UTC - event for dra-test-driver-hj5qv: {kubelet latest-worker2} Created: Created container: pause I0922 01:24:12.911068 21 dump.go:53] At 2025-09-22 00:26:03 +0000 UTC - event for dra-test-driver-hj5qv: {kubelet latest-worker2} Started: Started container pause I0922 01:24:12.911080 21 dump.go:53] At 2025-09-22 00:26:03 +0000 UTC - event for dra-test-driver-khrc2: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:12.911091 21 dump.go:53] At 2025-09-22 00:26:03 +0000 UTC - event for dra-test-driver-khrc2: {kubelet latest-worker} Created: Created container: pause I0922 01:24:12.911099 21 dump.go:53] At 2025-09-22 00:26:03 +0000 UTC - event for dra-test-driver-khrc2: {kubelet latest-worker} Started: Started container pause I0922 01:24:12.911109 21 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-hj5qv: {kubelet latest-worker2} Killing: Stopping container pause I0922 01:24:12.911139 21 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-khrc2: {kubelet latest-worker} Killing: Stopping container pause I0922 01:24:12.913981 21 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:12.914041 21 resource.go:158] dra-test-driver-hj5qv latest-worker2 Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:04 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:02 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:04 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:04 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:02 +0000 UTC }] I0922 01:24:12.914117 21 resource.go:158] dra-test-driver-khrc2 latest-worker Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:03 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:02 +0000 UTC } 
{Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:03 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:03 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:02 +0000 UTC }] I0922 01:24:12.914135 21 resource.go:161] I0922 01:24:13.006093 21 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:13.010984 21 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.011047 21 dump.go:116] Logging kubelet events for 
node latest-control-plane I0922 01:24:13.014124 21 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:13.032163 21 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032187 21 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:13.032212 21 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032221 21 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:13.032233 21 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032243 21 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.032254 21 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032272 21 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.032283 21 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032292 21 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.032301 21 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032311 21 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:13.032321 21 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032336 21 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.032349 21 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032359 21 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.032370 21 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032379 21 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.032392 21 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.032401 21 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:13.110965 21 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.111022 21 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.116417 21 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 
docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c 
docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.116496 21 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.119431 21 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.135699 21 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135731 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135753 21 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135771 21 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.135790 21 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 
UTC (0+1 container statuses recorded) I0922 01:24:13.135837 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135864 21 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135881 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135899 21 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135916 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135937 21 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135961 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135980 21 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135996 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.136015 21 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.136030 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.136052 21 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.136068 21 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.136086 21 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.136102 21 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.823779 21 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.823826 21 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.827442 21 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 
docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c 
docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.827501 21 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.830292 21 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.843601 21 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.843642 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.843661 21 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.843676 21 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.843693 21 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.843707 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.843723 21 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.843737 21 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.843753 21 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.843767 21 
dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.843782 21 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.843795 21 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.843822 21 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.843835 21 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:14.447309 21 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-6036" for this suite. @ 09/22/25 01:24:14.447 << Timeline [TIMEDOUT] A suite timeout occurred In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.862 This is the Progress Report generated when the suite timeout occurred: [sig-node] [DRA] control plane [ConformanceCandidate] with node-local resources uses all resources (Spec Runtime: 58m10.253s) k8s.io/kubernetes/test/e2e/dra/dra.go:1027 In [BeforeEach] (Node Runtime: 58m10.193s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 At [By Step] deploying driver dra-6036.k8s.io on nodes [latest-worker latest-worker2] (Step Runtime: 58m10.193s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397 Spec Goroutine goroutine 8134 [select] k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc00421a900, {0x5a48ba8, 0xc0021ab360}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537 k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc003249680?}, {{0xc0041cda50, 0xf}, {0x5aa1d18, 0xc0049816c0}, 0xc001379040, 0xc00016f2d8, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419 k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc003249680}, {{0xc0041cda50, 0xf}, {0x5aa1d18, 0xc0049816c0}, 0xc001379040, 0xc00016f2d8, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179 k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc003f00ea0, {0xc003249170?, 0x5a3c470?}, {0xc002e210e0?}) k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773 > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc003249170}, {0x51bc291, 0x4}, {0xc0041cda50, 0xf}, {0x5aa1d18, 0xc0049816c0}, {0xc003b61af0, 0xd}, ...) k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc0048368f0, 0xc0007502d0, 0xc002c43410) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc0048368f0, 0x0?, 0xc003bb77d0?) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323 > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x5a062e0?, 0xc0021c85d0?}) github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7483 github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881 Goroutines of Interest goroutine 8153 [sync.Cond.Wait, 59 minutes] sync.runtime_notifyListWait(0xc00260dba8, 0x0) runtime/sema.go:597 sync.(*Cond).Wait(0x51dbe59?) 
sync/cond.go:71 k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc00260db80, 0xc000ffe600) k8s.io/client-go/tools/cache/the_real_fifo.go:207 k8s.io/client-go/tools/cache.(*controller).processLoop(0xc001e21290, {0x5a48dd8, 0xc0028f49a0}) k8s.io/client-go/tools/cache/controller.go:211 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc0028f49a0?}, 0xc0028e2d80?) k8s.io/apimachinery/pkg/util/wait/backoff.go:255 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc0028f49a0}, 0xc00224fdb8, {0x5a06ae0, 0xc0028e2d80}, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:256 k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc0028f49a0}, 0xc00224fdb8, 0x3b9aca00, 0x0, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:223 k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...) k8s.io/apimachinery/pkg/util/wait/backoff.go:172 k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc001e21290, {0x5a48dd8, 0xc0028f49a0}) k8s.io/client-go/tools/cache/controller.go:183 k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc003e96840, {0x5a48dd8, 0xc0028f49a0}) k8s.io/client-go/tools/cache/shared_informer.go:587 k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0027aac30?, 0xc002199c20?) k8s.io/client-go/tools/cache/shared_informer.go:526 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 8151 k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211 goroutine 8176 [chan receive, 59 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc000883a10) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc0013c5800, {0x5a3c4a0, 0xc000883a10}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 8134 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 goroutine 8175 [chan receive, 59 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc000883998) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc0013c5600, {0x5a3c4a0, 0xc000883998}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 8134 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 There were additional failures detected. 
To view them in detail run ginkgo -vv ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [TIMEDOUT] [3558.367 seconds] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports inline claim referenced by multiple containers [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:856 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:24:56.103 I0922 00:24:56.103949 17 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:24:56.105 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:24:56.115 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:24:56.119 STEP: selecting nodes @ 09/22/25 00:24:56.124 I0922 00:24:56.167243 17 deploy.go:142] testing on nodes [latest-worker] STEP: deploying driver dra-3361.k8s.io on nodes [latest-worker] @ 09/22/25 00:24:56.167 I0922 00:24:56.173332 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:56.173428 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:24:56.226305 17 create.go:156] creating *v1.ReplicaSet: dra-3361/dra-test-driver I0922 00:24:57.367697 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:57.367768 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:24:58.251913 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:24:59.080237 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:24:59.366050 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:24:59.366133 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:01.774314 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the 
requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:05.241764 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:05.241921 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:07.358332 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:11.842042 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:11.842150 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:19.600497 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:25:35.339329 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:25:35.339438 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:25:42.083489 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:23.220976 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:23.221097 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:32.484233 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" 
type="*v1.ResourceSlice" I0922 00:26:56.493164 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:56.493282 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:27:27.997038 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:31.248780 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:31.248888 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:28:08.330687 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:08.330784 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:11.968742 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:53.656858 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:53.656986 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:09.484268 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:29.651534 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:29.651633 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" 
type="*v1.ResourceClaim" E0922 00:29:47.175716 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:23.454960 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:23.455087 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:34.984805 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:31:14.764935 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:22.111612 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:22.111722 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:47.947119 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:54.722290 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:54.722392 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:34.412653 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:36.702762 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:36.702864 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested 
resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:18.464969 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:25.106816 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:25.106918 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:12.867311 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:18.967737 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:18.967882 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:46.310172 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:01.595783 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:01.595905 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:32.509830 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:35.181763 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:35.181870 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:36:10.819120 17 
deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:10.819218 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:14.581852 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:00.642101 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:00.642223 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:09.020764 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:31.978091 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:31.978173 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:42.214213 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:38:14.428585 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:14.543752 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:14.543867 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:51.552220 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:54.398024 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:54.398108 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:39:29.496443 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:29.496544 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:32.735478 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:01.057222 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:01.057301 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:13.126535 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:40.617424 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:40.617531 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:09.918786 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:35.078826 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:35.078923 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:50.290861 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:05.369518 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:05.369620 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:24.779967 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:43.414617 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:43.414718 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:02.109948 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:15.360277 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:15.360382 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:39.549483 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:54.033893 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:54.033992 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:33.464201 17 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:38.019930 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:38.020032 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:45:23.824025 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:23.824130 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:31.666836 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:10.218972 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:10.219081 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:31.600422 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:47:04.476409 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:09.282136 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:09.282235 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:55.247698 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:06.915025 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:06.915135 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:32.911642 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:40.546145 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:40.546241 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:49:19.495553 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:49:29.502899 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:29.502994 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:50:06.191769 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:06.191909 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:11.755892 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:50:46.115594 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:46.682218 17 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:46.682306 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:17.836349 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:33.851570 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:33.851677 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:52:06.684080 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:06.684158 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:10.519678 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:52:43.659129 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:59.471042 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:59.471155 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:41.927010 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:50.405979 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:50.406117 17 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:32.321897 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:35.898276 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:35.898414 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:14.660913 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:32.875717 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:32.875863 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:49.548912 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:22.364823 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:22.364974 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:48.817432 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:13.759135 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:13.759223 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:35.640612 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:58:08.314933 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:11.669172 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:11.669282 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:58:51.863088 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:51.863209 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:03.260616 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:25.840983 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:25.841085 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:38.880298 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:59.030357 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:59.030464 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:25.088543 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:38.158263 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:38.158369 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:03.094563 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:01:25.688966 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:25.689069 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:41.993750 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:07.408856 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:07.408967 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:02:17.222321 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:45.486251 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:45.486359 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:06.739437 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:37.669536 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:37.669633 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:57.054898 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:34.269612 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:34.269773 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:50.616319 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:13.330093 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:13.330194 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:30.602417 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:46.962889 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:46.963005 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:06:19.792813 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:19.792917 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:25.644602 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:05.997741 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:05.997845 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:10.018047 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:07:42.243208 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:55.052859 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:55.052960 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:40.318957 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:51.595735 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:51.595853 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:29.434545 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:37.401477 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:37.401583 17 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:59.869240 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:15.086282 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:15.086396 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:10:54.693885 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:57.812883 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:57.812995 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:11:34.747264 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:34.747358 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:42.169115 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:10.410034 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:10.410143 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:35.311729 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:57.297742 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:57.297844 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:32.824606 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:42.786926 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:42.787041 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:21.556126 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:37.605334 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:37.605437 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:05.299059 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:24.512896 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:24.513000 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:57.307400 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:03.156537 17 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:03.156635 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:37.338846 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:39.116236 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:39.116341 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:17:16.240465 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:16.240558 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:27.036458 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:54.186239 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:54.186342 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:16.431080 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:18:29.318029 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:29.318134 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:14.651382 17 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:27.484059 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:27.484157 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:51.408309 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:25.599681 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:25.599845 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:36.469461 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:10.255511 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:10.255613 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:20.146761 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:55.458292 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:55.458399 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:12.065235 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:38.683060 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:38.683176 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:02.512395 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:13.224745 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:13.225098 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:23:48.888193 17 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:48.888347 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:24:00.092490 17 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.847 I0922 01:24:12.882865 17 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc00100adc0>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.883 STEP: Waiting for ResourceSlices of driver dra-3361.k8s.io to be removed... @ 09/22/25 01:24:12.883 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.888 I0922 01:24:12.890195 17 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:12.906 STEP: Collecting events from namespace "dra-3361". @ 09/22/25 01:24:12.906 STEP: Found 6 events. 
I0922 01:24:12.890195 17 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready
STEP: dump namespace information after failure @ 09/22/25 01:24:12.906
STEP: Collecting events from namespace "dra-3361". @ 09/22/25 01:24:12.906
STEP: Found 6 events. @ 09/22/25 01:24:12.909
I0922 01:24:12.909753 17 dump.go:53] At 2025-09-22 00:24:56 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-vwd92
I0922 01:24:12.909770 17 dump.go:53] At 2025-09-22 00:24:56 +0000 UTC - event for dra-test-driver-vwd92: {default-scheduler } Scheduled: Successfully assigned dra-3361/dra-test-driver-vwd92 to latest-worker
I0922 01:24:12.909793 17 dump.go:53] At 2025-09-22 00:24:56 +0000 UTC - event for dra-test-driver-vwd92: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine
I0922 01:24:12.909804 17 dump.go:53] At 2025-09-22 00:24:56 +0000 UTC - event for dra-test-driver-vwd92: {kubelet latest-worker} Created: Created container: pause
I0922 01:24:12.909814 17 dump.go:53] At 2025-09-22 00:24:57 +0000 UTC - event for dra-test-driver-vwd92: {kubelet latest-worker} Started: Started container pause
I0922 01:24:12.909823 17 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-vwd92: {kubelet latest-worker} Killing: Stopping container pause
I0922 01:24:12.912842 17 resource.go:151] POD NODE PHASE GRACE CONDITIONS
I0922 01:24:12.912917 17 resource.go:158] dra-test-driver-vwd92 latest-worker Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:57 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:56 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:57 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:57 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:24:56 +0000 UTC }]
I0922 01:24:12.912929 17 resource.go:161]
I0922 01:24:13.005937 17 dump.go:109] Logging node info for node latest-control-plane
I0922 01:24:13.010529 17 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 
registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.010862 17 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:13.013939 17 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:13.035761 17 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035827 17 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.035870 17 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035895 17 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.035916 17 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035935 17 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:13.035970 17 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.035988 17 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.036008 17 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.036026 17 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.036083 17 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.036100 17 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.036126 17 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 
01:24:13.036155 17 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:13.036189 17 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.036222 17 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:13.036242 17 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.036259 17 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:13.036279 17 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.036310 17 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.110603 17 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.110641 17 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.116338 17 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: 
{{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f 
registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.116394 17 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.119760 17 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.135037 17 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135082 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135119 17 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135151 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135184 17 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135210 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135240 17 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135260 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135288 17 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135313 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135341 17 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135366 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135392 17 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135425 17 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.135455 17 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135487 17 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.135524 17 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135555 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.135591 17 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.135618 17 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.838066 17 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.838095 17 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.842849 17 
dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 
registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 
litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 
01:24:13.842902 17 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.845670 17 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.857085 17 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.857124 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.857148 17 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.857169 17 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.857192 17 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.857211 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.857230 17 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.857246 17 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.857268 17 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.857287 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.857308 17 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.857327 17 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.857348 17 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.857366 17 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:14.461354 17 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-3361" for this suite. 
@ 09/22/25 01:24:14.464
<< Timeline

[TIMEDOUT] A suite timeout occurred
In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.847

This is the Progress Report generated when the suite timeout occurred:
  [sig-node] [DRA] control plane [ConformanceCandidate] supports inline claim referenced by multiple containers (Spec Runtime: 59m16.744s)
    k8s.io/kubernetes/test/e2e/dra/dra.go:856
    In [BeforeEach] (Node Runtime: 59m16.68s)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
      At [By Step] deploying driver dra-3361.k8s.io on nodes [latest-worker] (Step Runtime: 59m16.68s)
        k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397

  Spec Goroutine
  goroutine 7873 [select]
    k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc003b7ca80, {0x5a48ba8, 0xc003501cc0})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537
    k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc00698fa40?}, {{0xc004318550, 0xf}, {0x5aa1d18, 0xc0058ad180}, 0xc001b863c0, 0xc005b21f38, {0x0, 0x0}, ...})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419
    k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc00698fa40}, {{0xc004318550, 0xf}, {0x5aa1d18, 0xc0058ad180}, 0xc001b863c0, 0xc005b21f38, {0x0, 0x0}, ...})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179
    k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc001561930, {0xc00698f530?, 0x5a3c470?}, {0xc0067ef320?})
      k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773
  > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc00698f530}, {0x51bc291, 0x4}, {0xc004318550, 0xf}, {0x5aa1d18, 0xc0058ad180}, {0xc0044d47d0, 0xd}, ...)
      k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223
  > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc000bd33f0, 0xc0013a0780, 0xc0065d2330)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615
  > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc000bd33f0, 0xc0019e4f68?, 0x46caa74?)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323
  > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1()
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292
    github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x22221f6?, 0xc006718300?})
      github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7537
      github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881

  Goroutines of Interest
  goroutine 7891 [sync.Cond.Wait, 59 minutes]
    sync.runtime_notifyListWait(0xc000c331a8, 0x0)
      runtime/sema.go:597
    sync.(*Cond).Wait(0x51dbe59?)
      sync/cond.go:71
    k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc000c33180, 0xc0018d8dc0)
      k8s.io/client-go/tools/cache/the_real_fifo.go:207
    k8s.io/client-go/tools/cache.(*controller).processLoop(0xc001be4dc0, {0x5a48dd8, 0xc001e96700})
      k8s.io/client-go/tools/cache/controller.go:211
    k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc001e96700?}, 0xc0060712c0?)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:255
    k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc001e96700}, 0xc006229db8, {0x5a06ae0, 0xc0060712c0}, 0x1)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:256
    k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc001e96700}, 0xc006229db8, 0x3b9aca00, 0x0, 0x1)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:223
    k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:172
    k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc001be4dc0, {0x5a48dd8, 0xc001e96700})
      k8s.io/client-go/tools/cache/controller.go:183
    k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc006626420, {0x5a48dd8, 0xc001e96700})
      k8s.io/client-go/tools/cache/shared_informer.go:587
    k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0033e08c0?, 0xc006499540?)
      k8s.io/client-go/tools/cache/shared_informer.go:526
  > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7()
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213
  > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 7841
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211

  goroutine 7932 [chan receive, 59 minutes]
  > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc004c007e0)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923
    google.golang.org/grpc.(*Server).Serve(0xc006035400, {0x5a3c4a0, 0xc004c007e0})
      google.golang.org/grpc@v1.72.1/server.go:890
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1()
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7873
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80

  goroutine 7933 [chan receive, 59 minutes]
  > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc004c00858)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923
    google.golang.org/grpc.(*Server).Serve(0xc006035600, {0x5a3c4a0, 0xc004c00858})
      google.golang.org/grpc@v1.72.1/server.go:890
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1()
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7873
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80

There were additional failures detected.
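The 404s throughout this run all point at resource.k8s.io resources (resourceclaims, resourceslices) and the reflectors are listing *v1.ResourceSlice, while the node dumps report cluster components at v1.33.1; this looks like a test binary built from a newer tree requesting the resource.k8s.io/v1 group version from an apiserver that most likely does not serve it yet. The following is only an illustrative client-go sketch, not part of the suite output, assuming the kubeconfig path shown earlier in the log; it lists which versions of the resource.k8s.io group the apiserver actually serves:

```go
// Sketch: print the served group versions of resource.k8s.io.
// The kubeconfig path below matches the one logged by the suite; adjust as needed.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name != "resource.k8s.io" {
			continue
		}
		for _, v := range g.Versions {
			// A run like the one above would need resource.k8s.io/v1 to appear here.
			fmt.Println(v.GroupVersion)
		}
	}
}
```

If the output lists only beta versions of resource.k8s.io, a version mismatch between the e2e binary and the cluster, rather than the driver deployment itself, would account for both the 404s and the ResourceSlice informer that never syncs.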
To view them in detail run ginkgo -vv ------------------------------ SSSS ------------------------------ • [TIMEDOUT] [3331.885 seconds] [sig-node] [DRA] ResourceSlice Controller [It] creates slices [ConformanceCandidate] [sig-node, DRA, ConformanceCandidate] k8s.io/kubernetes/test/e2e/dra/dra.go:2084 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:28:42.588 I0922 00:28:42.588383 32 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:28:42.59 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:28:42.599 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:28:42.603 STEP: Creating slices @ 09/22/25 00:28:42.948 E0922 00:28:42.955373 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:28:44.424562 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:28:47.198754 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:28:50.987128 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:28:57.970279 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:29:19.333186 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:29:50.540103 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:30:27.862901 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:31:21.269340 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get 
resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:32:09.733147 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:33:00.088624 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:33:43.927737 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:34:23.458938 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:35:07.549624 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:36:04.641120 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:37:04.420155 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:37:48.116750 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:38:47.684977 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:39:32.640080 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:40:26.185420 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: 
the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:41:01.032625 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:41:44.602121 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:42:38.498194 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:43:31.885539 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:44:29.870037 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:45:01.268981 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:45:47.592348 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:46:19.229152 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:47:11.581473 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:47:50.606412 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:48:21.343080 32 reflector.go:205] 
"Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:49:14.127859 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:50:09.770093 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:50:53.149544 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:51:23.806681 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:52:06.591287 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:53:01.554818 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:53:39.036624 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:54:16.117971 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:55:03.125685 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:55:51.163661 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" 
type="*v1.ResourceSlice" E0922 00:56:33.728987 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:57:09.624571 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:57:57.907535 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:58:50.345259 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:59:35.599225 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:00:15.304888 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:01:07.075255 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:01:41.182649 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:02:40.873461 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:03:32.978429 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:04:21.663856 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:05:16.405027 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:06:08.521571 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:06:43.107961 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:07:24.393652 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:08:11.655879 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:08:52.217115 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:09:24.581652 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:10:20.061042 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:11:08.218417 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:11:52.117301 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:12:27.817531 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get 
resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:12:58.533133 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:13:54.881728 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:14:51.176308 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:15:32.092358 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:16:07.259191 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:16:45.509555 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:17:43.219761 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:18:26.926752 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:19:18.665488 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:20:11.285891 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:20:59.989010 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: 
the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice"
E0922 01:21:57.997234 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice"
E0922 01:22:54.602387 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice"
E0922 01:23:25.812746 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice"
E0922 01:24:09.098682 32 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice"
[TIMEDOUT] in [It] - k8s.io/kubernetes/test/e2e/dra/dra.go:2084 @ 09/22/25 01:24:12.833
I0922 01:24:12.835646 32 dra.go:2130] Unexpected error: start controller:
    <*fmt.wrapError | 0xc0042463e0>: create controller: sync ResourceSlice informer: suite timeout occurred
    {
        msg: "create controller: sync ResourceSlice informer: suite timeout occurred",
        err: <*fmt.wrapError | 0xc0042463c0>{
            msg: "sync ResourceSlice informer: suite timeout occurred",
            err: <*errors.errorString | 0xc000c5b040>{
                s: "suite timeout occurred",
            },
        },
    }
[FAILED] in [It] - k8s.io/kubernetes/test/e2e/dra/dra.go:2130 @ 09/22/25 01:24:12.836
I0922 01:24:12.837347 32 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready
STEP: dump namespace information after failure @ 09/22/25 01:24:12.905
STEP: Collecting events from namespace "dra-5617". @ 09/22/25 01:24:12.905
STEP: Found 0 events.
@ 09/22/25 01:24:12.908 I0922 01:24:12.912044 32 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:12.912067 32 resource.go:161] I0922 01:24:12.915292 32 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:12.918271 32 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:12.918314 32 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:12.920876 32 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:12.942514 
32 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942555 32 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:12.942569 32 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942580 32 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:12.942603 32 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942615 32 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:12.942626 32 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942636 32 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:12.942647 32 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942657 32 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:12.942668 32 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942677 32 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:12.942688 32 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942700 32 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:12.942711 32 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942720 32 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:12.942731 32 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942740 32 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:12.942751 32 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:12.942760 32 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.075235 32 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.075302 32 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.080271 32 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 
docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c 
docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.080357 32 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.083553 32 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.102333 32 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102369 32 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.102393 32 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102412 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102432 32 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 
UTC (0+1 container statuses recorded) I0922 01:24:13.102449 32 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.102473 32 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102490 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102509 32 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102526 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102546 32 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102562 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102587 32 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102601 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102620 32 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102636 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102655 32 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102671 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.102690 32 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.102727 32 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.824803 32 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.824845 32 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.828861 32 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 
docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c 
docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.829126 32 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.831735 32 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.844379 32 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.844405 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.844416 32 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.844427 32 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.844454 32 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.844463 32 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.844480 32 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.844490 32 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.844502 32 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.844511 32 
dump.go:134] Container pause ready: true, restart count 0
I0922 01:24:13.844521 32 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded)
I0922 01:24:13.844531 32 dump.go:134] Container kube-proxy ready: true, restart count 0
I0922 01:24:13.844544 32 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded)
I0922 01:24:13.844553 32 dump.go:134] Container pause ready: true, restart count 0
I0922 01:24:14.467370 32 kubelet_metrics.go:206] Latency metrics for node latest-worker2
STEP: Destroying namespace "dra-5617" for this suite. @ 09/22/25 01:24:14.467
<< Timeline
[TIMEDOUT] A suite timeout occurred
In [It] at: k8s.io/kubernetes/test/e2e/dra/dra.go:2084 @ 09/22/25 01:24:12.833
This is the Progress Report generated when the suite timeout occurred:
[sig-node] [DRA] ResourceSlice Controller creates slices [ConformanceCandidate] (Spec Runtime: 55m30.246s)
  k8s.io/kubernetes/test/e2e/dra/dra.go:2084
  In [It] (Node Runtime: 55m30.225s)
    k8s.io/kubernetes/test/e2e/dra/dra.go:2084
    At [By Step] Creating slices (Step Runtime: 55m29.885s)
      k8s.io/kubernetes/test/e2e/dra/dra.go:2122
Spec Goroutine
goroutine 9206 [select]
  k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc0007f2840, {0x5a48ba8, 0xc004a1a050})
    k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537
  k8s.io/dynamic-resource-allocation/resourceslice.newController({0x7f13f0cbf900?, 0xc0047ac9f0?}, {{0xc0042c4ac0, 0x8}, {0x5aa1d18, 0xc000e928c0}, 0x0, 0xc005177700, {0x0, 0x0}, ...})
    k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419
  k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x7f13f0cbf900, 0xc0047ac9f0}, {{0xc0042c4ac0, 0x8}, {0x5aa1d18, 0xc000e928c0}, 0x0, 0xc005177700, {0x0, 0x0}, ...})
    k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179
> k8s.io/kubernetes/test/e2e/dra.init.func1.18.1({0x7f13f0cbf900, 0xc0047ac9f0})
    k8s.io/kubernetes/test/e2e/dra/dra.go:2124
  github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x5a52f78?, 0xc0047ac9f0?})
    github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:465
  github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
    github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894
  github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7604
    github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881
[FAILED] A suite timeout occurred and then the following failure was recorded in the timedout node before it exited:
start controller: create controller: sync ResourceSlice informer: suite timeout occurred
In [It] at: k8s.io/kubernetes/test/e2e/dra/dra.go:2130 @ 09/22/25 01:24:12.836
------------------------------
SS
------------------------------
• [TIMEDOUT] [3442.989 seconds]
[sig-node] [DRA] control plane [ConformanceCandidate] with different ResourceSlices [BeforeEach] keeps pod pending because of CEL runtime errors [sig-node, DRA, ConformanceCandidate]
[BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
[It] k8s.io/kubernetes/test/e2e/dra/dra.go:968
Timeline >>
STEP: Creating a kubernetes client @ 09/22/25 00:26:51.484
I0922 00:26:51.484583 34 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename dra @ 09/22/25 00:26:51.486
STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:26:51.495
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @
09/22/25 00:26:51.499 STEP: selecting nodes @ 09/22/25 00:26:51.507 I0922 00:26:51.572995 34 deploy.go:142] testing on nodes [latest-worker latest-worker2] STEP: deploying driver dra-9101.k8s.io on nodes [latest-worker latest-worker2] @ 09/22/25 00:26:51.573 I0922 00:26:51.579177 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:51.579333 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:26:51.631214 34 create.go:156] creating *v1.ReplicaSet: dra-9101/dra-test-driver I0922 00:26:52.565029 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:52.565117 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:53.657258 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:26:55.256750 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:55.582569 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:55.582659 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:26:57.443449 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:26:59.299160 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:26:59.299269 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:27:00.850132 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get 
resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:27:09.058234 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:10.055506 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:10.055628 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:27:26.953396 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:26.953497 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:27:33.882660 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:52.746171 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:52.746274 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:21.746330 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:36.615256 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:36.615344 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:03.278020 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 
00:29:32.690087 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:32.690187 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:44.202948 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:30:17.757439 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:31.722759 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:31.722869 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:00.762277 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:03.031130 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:03.031234 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:59.165337 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:01.372226 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:01.372322 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:32:50.341477 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get 
resourceclaims.resource.k8s.io)" E0922 00:32:50.341596 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:51.469490 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:33.266651 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:33.266749 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:34.911571 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:14.439783 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:14.439973 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:22.316023 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:35:08.648095 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:09.503689 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:09.503789 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:50.537356 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:58.860880 34 deploy.go:156] 
"Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:58.861010 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:32.036214 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:36:50.792752 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:50.792847 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:06.900797 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:47.472439 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:47.472539 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:58.383495 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:41.637604 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:41.637703 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:47.067971 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:17.318155 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:17.318262 
34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:17.946023 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:49.182344 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:49.182440 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:00.894618 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:40:35.445901 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:45.409667 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:45.409775 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:15.318516 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:24.963575 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:24.963681 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:59.317521 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:11.169806 34 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:11.169904 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:51.078769 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:56.912021 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:56.912118 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:43:43.156575 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:43.156650 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:43:50.522334 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:34.673019 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:34.673133 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:46.399776 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:31.400319 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:31.400417 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:42.479922 34 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:24.165168 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:24.165309 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:39.033415 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:17.800013 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:17.800114 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:18.716448 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:47:49.368675 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:14.575954 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:14.576070 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:37.384611 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:51.843451 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:51.843551 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:49:31.920204 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:49:48.630411 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:48.630518 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:50:21.634562 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:21.634665 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:23.409717 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:50:53.530266 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:10.276950 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:10.277079 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:40.885827 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:06.749085 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:06.749204 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:35.995364 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:47.019600 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:47.019716 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:11.629233 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:23.821071 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:23.821170 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:50.075030 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:19.519291 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:19.519482 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:20.116118 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:55.743282 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:55.743377 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:57.802774 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:55:39.615263 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:43.079191 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:43.079291 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:56:17.432069 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:17.432221 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:23.887646 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:50.955079 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:50.955183 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:19.665804 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:24.893751 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:24.893865 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:14.580196 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:22.809400 34 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:22.809542 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:50.743725 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:07.397390 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:07.397489 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:40.989588 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:05.718764 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:05.718879 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:26.755532 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:40.063768 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:40.063905 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:01:12.381913 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:12.382008 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:19.660646 34 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:01:55.986622 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:01:57.420899 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:57.421024 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:02:46.569322 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:47.745365 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:47.745470 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:21.255821 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:34.175608 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:34.175684 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:20.927623 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:23.491048 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:23.491144 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:13.777747 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:14.410232 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:14.410327 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:04.799864 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:04.913645 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:04.913709 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:39.606264 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:46.353836 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:46.353941 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:23.132594 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:26.119727 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:26.119864 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:05.120097 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:12.821189 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:12.821287 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:53.105461 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:55.037911 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:55.038006 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:37.018712 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:53.047974 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:53.048076 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:10:12.645617 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:38.135603 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:38.135897 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:01.692557 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:11:32.958165 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:32.958269 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:54.320050 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:12:26.349852 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:32.415006 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:32.415113 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:19.769416 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:29.435767 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:29.435930 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:17.559704 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:20.375729 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:20.375857 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:51.115097 34 reflector.go:205] "Failed to watch" err="failed to 
list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:57.969703 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:57.969798 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:28.072436 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:53.818765 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:53.818899 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:04.721314 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:50.865297 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:50.865374 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:55.007146 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:25.475748 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:25.475987 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:50.982678 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:18:01.967558 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:01.967692 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:18:34.392206 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:34.392311 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:48.092391 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:28.510843 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:28.510941 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:41.680110 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:20:11.815398 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:18.189926 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:18.190025 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:53.503132 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:59.386464 34 deploy.go:156] "Listing ResourceClaims failed" 
logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:59.386566 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:50.580510 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:58.588002 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:58.588442 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:41.004307 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:47.193872 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:47.193974 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:22.241333 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:28.882261 34 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:28.882358 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:24:02.803447 34 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.938 I0922 01:24:12.972379 34 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc006b90f00>: the server could not find the 
requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.972 STEP: Waiting for ResourceSlices of driver dra-9101.k8s.io to be removed... @ 09/22/25 01:24:12.973 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.977 I0922 01:24:12.978964 34 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:13.006 STEP: Collecting events from namespace "dra-9101". @ 09/22/25 01:24:13.006 STEP: Found 12 events. @ 09/22/25 01:24:13.01 I0922 01:24:13.010258 34 dump.go:53] At 2025-09-22 00:26:51 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-8pr5d I0922 01:24:13.010287 34 dump.go:53] At 2025-09-22 00:26:51 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-drd6r I0922 01:24:13.010309 34 dump.go:53] At 2025-09-22 00:26:51 +0000 UTC - event for dra-test-driver-8pr5d: {default-scheduler } Scheduled: Successfully assigned dra-9101/dra-test-driver-8pr5d to latest-worker2 I0922 01:24:13.010343 34 dump.go:53] At 2025-09-22 00:26:51 +0000 UTC - event for dra-test-driver-drd6r: {default-scheduler } Scheduled: Successfully assigned dra-9101/dra-test-driver-drd6r to latest-worker I0922 01:24:13.010370 34 dump.go:53] At 2025-09-22 00:26:52 +0000 UTC - event for dra-test-driver-8pr5d: {kubelet latest-worker2} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:13.010391 34 dump.go:53] At 2025-09-22 00:26:52 +0000 UTC - event for dra-test-driver-8pr5d: {kubelet latest-worker2} Created: Created container: pause I0922 01:24:13.010410 34 dump.go:53] At 2025-09-22 00:26:52 +0000 UTC - event for dra-test-driver-8pr5d: {kubelet latest-worker2} Started: Started container pause I0922 01:24:13.010437 34 dump.go:53] At 2025-09-22 00:26:52 +0000 UTC - event for dra-test-driver-drd6r: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:13.010459 34 dump.go:53] At 2025-09-22 00:26:52 +0000 UTC - event for dra-test-driver-drd6r: {kubelet latest-worker} Created: Created container: pause I0922 01:24:13.010478 34 dump.go:53] At 2025-09-22 00:26:52 +0000 UTC - event for dra-test-driver-drd6r: {kubelet latest-worker} Started: Started container pause I0922 01:24:13.010498 34 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-8pr5d: {kubelet latest-worker2} Killing: Stopping container pause I0922 01:24:13.010516 34 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-drd6r: {kubelet latest-worker} Killing: Stopping container pause I0922 01:24:13.014840 34 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:13.014946 34 resource.go:158] dra-test-driver-8pr5d latest-worker2 Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:52 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:51 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 
+0000 UTC 2025-09-22 00:26:52 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:52 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:51 +0000 UTC }] I0922 01:24:13.015014 34 resource.go:158] dra-test-driver-drd6r latest-worker Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:53 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:51 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:53 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:53 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:26:51 +0000 UTC }] I0922 01:24:13.015029 34 resource.go:161] I0922 01:24:13.049019 34 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:13.054197 34 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c 
registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.054251 34 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:13.058008 34 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:13.077293 34 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077353 34 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:13.077390 34 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077414 34 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:13.077449 34 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077479 34 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:13.077513 34 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077543 34 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.077570 34 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077599 34 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.077705 34 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077724 34 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.077751 34 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077767 34 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:13.077786 34 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077803 34 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.077821 34 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077837 34 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.077859 34 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.077874 34 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.150450 34 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.150499 34 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.157007 34 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 
8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 
registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 
registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.157056 34 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.160444 34 dump.go:121] 
Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.175744 34 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.175779 34 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.175824 34 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.175852 34 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.175890 34 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.175908 34 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.175927 34 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.175945 34 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.175964 34 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.175981 34 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.175999 34 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.176016 34 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.176046 34 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.176063 34 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.176081 34 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.176098 34 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.176121 34 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.176138 34 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.176153 34 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.176170 34 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.820818 34 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.820845 34 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.824588 34 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 
docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 
registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 
docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.824625 34 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.827427 34 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.836671 34 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.836703 34 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.836726 34 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.836743 34 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.836762 34 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.836779 34 dump.go:134] Container pause ready: true, 
restart count 0
I0922 01:24:13.836797 34 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded)
I0922 01:24:13.836814 34 dump.go:134] Container kindnet-cni ready: true, restart count 0
I0922 01:24:13.836833 34 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded)
I0922 01:24:13.836850 34 dump.go:134] Container pause ready: true, restart count 0
I0922 01:24:13.836868 34 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded)
I0922 01:24:13.836883 34 dump.go:134] Container kube-proxy ready: true, restart count 0
I0922 01:24:13.836898 34 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded)
I0922 01:24:13.836914 34 dump.go:134] Container pause ready: true, restart count 0
I0922 01:24:14.467970 34 kubelet_metrics.go:206] Latency metrics for node latest-worker2
STEP: Destroying namespace "dra-9101" for this suite. @ 09/22/25 01:24:14.468
<< Timeline

[TIMEDOUT] A suite timeout occurred
In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.938

This is the Progress Report generated when the suite timeout occurred:
  [sig-node] [DRA] control plane [ConformanceCandidate] with different ResourceSlices keeps pod pending because of CEL runtime errors (Spec Runtime: 57m21.454s)
    k8s.io/kubernetes/test/e2e/dra/dra.go:968
    In [BeforeEach] (Node Runtime: 57m21.365s)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
      At [By Step] deploying driver dra-9101.k8s.io on nodes [latest-worker latest-worker2] (Step Runtime: 57m21.365s)
        k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397

  Spec Goroutine
  goroutine 7974 [select]
    k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc006b920c0, {0x5a48ba8, 0xc0042ffe50})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537
    k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc006b98000?}, {{0xc004e9c3d0, 0xf}, {0x5aa1d18, 0xc006b7c000}, 0xc003a14180, 0xc00498e8d8, {0x0, 0x0}, ...})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419
    k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc006b98000}, {{0xc004e9c3d0, 0xf}, {0x5aa1d18, 0xc006b7c000}, 0xc003a14180, 0xc00498e8d8, {0x0, 0x0}, ...})
      k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179
    k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc002777930, {0xc006b8da70?, 0x5a3c470?}, {0xc006b4f200?})
      k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773
    > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc006b8da70}, {0x51bc291, 0x4}, {0xc004e9c3d0, 0xf}, {0x5aa1d18, 0xc006b7c000}, {0xc004fc84d0, 0xd}, ...)
      k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223
    > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc000de08f0, 0xc0010034f0, 0xc00688aff0)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615
    > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc000de08f0, 0x0?, 0x2235ee0?)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323
    > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1()
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292
    github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x5a06420?, 0x7d123a0?})
      github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894
    github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7432
      github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881

  Goroutines of Interest
  goroutine 8051 [chan receive, 58 minutes]
    > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc000766c48)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923
    google.golang.org/grpc.(*Server).Serve(0xc006adb200, {0x5a3c4a0, 0xc000766c48})
      google.golang.org/grpc@v1.72.1/server.go:890
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1()
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7974
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80

  goroutine 8052 [chan receive, 58 minutes]
    > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc000766cd8)
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923
    google.golang.org/grpc.(*Server).Serve(0xc006adb400, {0x5a3c4a0, 0xc000766cd8})
      google.golang.org/grpc@v1.72.1/server.go:890
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1()
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82
    k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 7974
      k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80

  goroutine 8023 [sync.Cond.Wait, 58 minutes]
    sync.runtime_notifyListWait(0xc0024baac8, 0x0)
      runtime/sema.go:597
    sync.(*Cond).Wait(0x51dbe59?)
      sync/cond.go:71
    k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc0024baaa0, 0xc003897340)
      k8s.io/client-go/tools/cache/the_real_fifo.go:207
    k8s.io/client-go/tools/cache.(*controller).processLoop(0xc00661ad10, {0x5a48dd8, 0xc002807b90})
      k8s.io/client-go/tools/cache/controller.go:211
    k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc002807b90?}, 0xc00664bad0?)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:255
    k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc002807b90}, 0xc004b88db8, {0x5a06ae0, 0xc00664bad0}, 0x1)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:256
    k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc002807b90}, 0xc004b88db8, 0x3b9aca00, 0x0, 0x1)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:223
    k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...)
      k8s.io/apimachinery/pkg/util/wait/backoff.go:172
    k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc00661ad10, {0x5a48dd8, 0xc002807b90})
      k8s.io/client-go/tools/cache/controller.go:183
    k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc00612e840, {0x5a48dd8, 0xc002807b90})
      k8s.io/client-go/tools/cache/shared_informer.go:587
    k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0042ce4b0?, 0x10000c006a04140?)
      k8s.io/client-go/tools/cache/shared_informer.go:526
    > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7()
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213
    > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 8007
      k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211

There were additional failures detected.
To view them in detail run ginkgo -vv ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ • [TIMEDOUT] [3377.800 seconds] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports external claim referenced by multiple pods [sig-node, DRA, ConformanceCandidate] [BeforeEach] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [It] k8s.io/kubernetes/test/e2e/dra/dra.go:869 Timeline >> STEP: Creating a kubernetes client @ 09/22/25 00:27:56.764 I0922 00:27:56.764296 19 util.go:454] >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename dra @ 09/22/25 00:27:56.765 STEP: Waiting for a default service account to be provisioned in namespace @ 09/22/25 00:27:56.774 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace @ 09/22/25 00:27:56.778 STEP: selecting nodes @ 09/22/25 00:27:56.784 I0922 00:27:56.841896 19 deploy.go:142] testing on nodes [latest-worker] STEP: deploying driver dra-3118.k8s.io on nodes [latest-worker] @ 09/22/25 00:27:56.842 I0922 00:27:56.847875 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:56.848043 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:27:56.902431 19 create.go:156] creating *v1.ReplicaSet: dra-3118/dra-test-driver I0922 00:27:57.900918 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:57.901025 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get 
resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:27:58.929110 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:27:59.635147 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:27:59.635248 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:00.178789 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:28:02.108908 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:05.291080 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:05.291185 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:07.381537 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:13.251912 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:13.252015 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:19.226627 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:32.329784 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 
00:28:32.329885 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:28:40.391374 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:28:58.919523 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:28:58.919612 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:29:25.145259 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:29:56.435425 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:29:56.435517 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:30:09.114818 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:30:51.783333 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:30:51.783441 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:31:05.424521 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:31:23.271004 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:31:23.271127 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:01.620006 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:32:14.005331 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:32:14.005432 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:32:37.320094 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:09.797271 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:09.797382 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:33:20.184366 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:33:40.889079 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:33:40.889179 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:34:10.896522 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:34:12.377630 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:34:12.377735 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:01.067053 19 reflector.go:205] "Failed to watch" err="failed to list 
*v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:04.262693 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:04.262793 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:35:42.238766 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:35:59.742679 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:35:59.742793 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:36:38.677383 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:36:38.677493 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:36:39.414878 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:37:29.510754 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:37:29.510871 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:37:37.479614 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:38:08.063664 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:20.554561 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:20.554664 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:38:47.010880 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:38:53.274461 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:38:53.274560 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:39:40.362090 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:39:52.995446 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:39:52.995537 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:40:24.723845 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:40:29.724500 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:40:29.724604 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:41:00.872155 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:00.872266 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:41:14.492655 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:41:59.507781 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:41:59.507918 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:13.066975 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:42:52.261391 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:42:52.261509 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:42:53.338419 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:43:34.741589 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:43:50.126100 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:43:50.126195 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:44:23.126157 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:44:33.837979 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:44:33.838075 19 reflector.go:205] "Failed to 
watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:05.764344 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:45:28.115207 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:45:28.115334 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:45:58.270485 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:46:21.347643 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:46:21.347748 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:46:42.011276 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:47:10.228512 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:47:10.228614 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:47:24.873735 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 00:47:57.249327 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:00.307733 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the 
server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:00.307871 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:48:45.967633 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:48:57.071299 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:48:57.071404 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:49:24.486941 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:49:52.711384 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:49:52.711722 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:03.337162 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:50:51.119845 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:50:51.119944 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:50:54.349213 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:51:28.237962 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:51:28.238091 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server 
could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:51:52.463235 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:52:16.610468 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:52:16.610544 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:52:46.331384 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:14.977608 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:14.977725 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:53:21.987048 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:53:46.200514 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:53:46.200651 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:54:02.602157 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:54:24.697914 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:54:24.698009 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 
00:54:43.760028 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:13.599664 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:13.599783 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:55:18.455199 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:55:46.651375 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:55:46.651484 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:03.289188 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:56:31.588427 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:56:31.588534 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:56:41.777612 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:06.384348 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:06.384455 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:57:29.538949 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" 
logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:57:51.521373 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:57:51.521472 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:58:26.217905 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:58:34.144596 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:58:34.144690 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 00:59:11.939391 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:11.939468 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 00:59:19.423421 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 00:59:53.274795 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 00:59:53.274902 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:12.412785 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:00:44.618174 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:00:44.618275 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get 
resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:00:45.239183 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:01:15.699833 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:01:15.699982 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:01:21.860518 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:02:00.385554 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:02:00.879052 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:00.879141 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:02:35.179698 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:02:35.179835 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:02:43.827931 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:08.041517 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:08.041623 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:03:27.440663 19 reflector.go:205] 
"Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:03:48.566829 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:03:48.566937 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:04:11.265092 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:04:38.465552 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:04:38.465650 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:06.284136 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:05:12.743309 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:05:12.743419 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:05:49.440595 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:06:04.780799 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:06:04.780934 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:06:20.428566 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:07:01.924915 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:07:01.925009 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:07:16.298195 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:07:51.250580 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:01.302687 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:01.302784 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:08:21.615720 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:08:58.473394 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:08:58.473506 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:09:12.625336 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:09:51.249466 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:09:55.509240 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:09:55.509351 19 reflector.go:205] 
"Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:10:22.261886 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:10:50.514375 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:10:50.514476 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:13.079083 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:11:38.136919 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:11:38.137061 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:11:57.690234 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:12:24.388143 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:12:24.388242 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:12:27.865460 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:13:09.413529 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:13:09.413628 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:13:16.776023 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:14:02.464526 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:04.248825 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:04.248928 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:14:41.976430 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:14:58.391032 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:14:58.391111 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:19.766219 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:15:48.788840 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:15:48.788935 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:15:55.240272 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:16:23.289796 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:16:23.289911 19 reflector.go:205] "Failed to watch" err="failed to 
list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:16:27.149129 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" E0922 01:16:58.880170 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:17:22.075871 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:17:22.075995 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:17:54.947943 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:18:05.092152 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:05.092260 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" I0922 01:18:41.261931 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:18:41.262036 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:18:53.371605 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:19:15.257960 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:19:15.258080 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" 
reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:19:31.574537 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:09.420103 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:09.420208 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:20:21.950282 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:20:50.960842 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:20:50.960942 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:01.966023 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:21:28.600586 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:21:28.600704 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:21:49.532829 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:22:24.213816 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:22:24.214284 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:22:47.338689 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the 
server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" I0922 01:23:17.488378 19 deploy.go:156] "Listing ResourceClaims failed" logger="ResourceClaimListWatch" resourceAPI="V1" err="the server could not find the requested resource (get resourceclaims.resource.k8s.io)" E0922 01:23:17.488505 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: the server could not find the requested resource (get resourceclaims.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213" type="*v1.ResourceClaim" E0922 01:23:41.035198 19 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: the server could not find the requested resource (get resourceslices.resource.k8s.io)" logger="UnhandledError" reflector="k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:534" type="*v1.ResourceSlice" [TIMEDOUT] in [BeforeEach] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.763 I0922 01:24:12.806972 19 deploy.go:423] Unexpected error: delete ResourceSlices of the driver: <*errors.StatusError | 0xc00688fcc0>: the server could not find the requested resource (delete resourceslices.resource.k8s.io) { ErrStatus: code: 404 details: causes: - message: 404 page not found reason: UnexpectedServerResponse group: resource.k8s.io kind: resourceslices message: the server could not find the requested resource (delete resourceslices.resource.k8s.io) metadata: {} reason: NotFound status: Failure, } [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:423 @ 09/22/25 01:24:12.807 STEP: Waiting for ResourceSlices of driver dra-3118.k8s.io to be removed... @ 09/22/25 01:24:12.807 [FAILED] in [DeferCleanup (Each)] - k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:992 @ 09/22/25 01:24:12.813 I0922 01:24:12.815072 19 helper.go:125] Waiting up to 7m0s for all (but 0) nodes to be ready STEP: dump namespace information after failure @ 09/22/25 01:24:12.905 STEP: Collecting events from namespace "dra-3118". @ 09/22/25 01:24:12.905 STEP: Found 6 events. 
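[Editor's note, not part of the captured log] The repeated 404s above all describe the same condition: the API server never advertises the resource.k8s.io/v1 group version that these DRA helpers list, watch, post, and delete, so every reflector retry and the final ResourceSlice cleanup fail identically until the BeforeEach times out. The node dump that follows reports kubelet and kube-apiserver at v1.33.1, which is consistent with a control plane that predates the GA (v1) DRA API. Below is a minimal client-go sketch, not part of the suite, for checking which resource.k8s.io versions a server actually serves before pointing these specs at it; the kubeconfig location is an assumption and should be adjusted to the cluster under test.

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust to match the cluster under test.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server which group/versions it serves and filter for the DRA group.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	served := false
	for _, g := range groups.Groups {
		if g.Name != "resource.k8s.io" {
			continue
		}
		served = true
		for _, v := range g.Versions {
			fmt.Println("served:", v.GroupVersion)
		}
	}
	if !served {
		fmt.Println("resource.k8s.io is not served at all by this API server")
	}
}

If resource.k8s.io/v1 does not appear in that output, the list, watch, post, and delete calls in the timeline above will keep returning NotFound no matter how long the reflectors retry.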
@ 09/22/25 01:24:12.909 I0922 01:24:12.909114 19 dump.go:53] At 2025-09-22 00:27:56 +0000 UTC - event for dra-test-driver: {replicaset-controller } SuccessfulCreate: Created pod: dra-test-driver-2r9tq I0922 01:24:12.909132 19 dump.go:53] At 2025-09-22 00:27:56 +0000 UTC - event for dra-test-driver-2r9tq: {default-scheduler } Scheduled: Successfully assigned dra-3118/dra-test-driver-2r9tq to latest-worker I0922 01:24:12.909152 19 dump.go:53] At 2025-09-22 00:27:57 +0000 UTC - event for dra-test-driver-2r9tq: {kubelet latest-worker} Pulled: Container image "registry.k8s.io/sig-storage/hostpathplugin:v1.7.3" already present on machine I0922 01:24:12.909164 19 dump.go:53] At 2025-09-22 00:27:57 +0000 UTC - event for dra-test-driver-2r9tq: {kubelet latest-worker} Created: Created container: pause I0922 01:24:12.909175 19 dump.go:53] At 2025-09-22 00:27:57 +0000 UTC - event for dra-test-driver-2r9tq: {kubelet latest-worker} Started: Started container pause I0922 01:24:12.909197 19 dump.go:53] At 2025-09-22 01:24:12 +0000 UTC - event for dra-test-driver-2r9tq: {kubelet latest-worker} Killing: Stopping container pause I0922 01:24:12.912646 19 resource.go:151] POD NODE PHASE GRACE CONDITIONS I0922 01:24:12.912710 19 resource.go:158] dra-test-driver-2r9tq latest-worker Running 30s [{PodReadyToStartContainers 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:27:58 +0000 UTC } {Initialized 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:27:56 +0000 UTC } {Ready 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:27:58 +0000 UTC } {ContainersReady 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:27:58 +0000 UTC } {PodScheduled 0 True 0001-01-01 00:00:00 +0000 UTC 2025-09-22 00:27:56 +0000 UTC }] I0922 01:24:12.912722 19 resource.go:161] I0922 01:24:13.006100 19 dump.go:109] Logging node info for node latest-control-plane I0922 01:24:13.010536 19 dump.go:114] Node Info: &Node{ObjectMeta:{latest-control-plane 4a88f68e-a233-4137-96e6-a4d02b8e46e1 5988996 0 2025-08-14 10:01:12 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2025-08-14 10:01:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2025-08-14 10:01:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2025-08-14 10:01:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2025-09-22 01:23:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:49 +0000 UTC,LastTransitionTime:2025-08-14 10:01:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c7fda80d8f9641588679c7e1084a869f,SystemUUID:634ef398-8dbd-484b-881c-6c5a7cef3a32,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 
registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c registry.k8s.io/pause:3.10.1],SizeBytes:320448,},ContainerImage{Names:[registry.k8s.io/pause:3.10],SizeBytes:320368,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.010578 19 dump.go:116] Logging kubelet events for node latest-control-plane I0922 01:24:13.014261 19 dump.go:121] Logging pods the kubelet thinks are on node latest-control-plane I0922 01:24:13.033522 19 dump.go:128] kube-system/etcd-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033559 19 dump.go:134] Container etcd ready: true, restart count 0 I0922 01:24:13.033598 19 dump.go:128] kube-system/kube-apiserver-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033637 19 dump.go:134] Container kube-apiserver ready: true, restart count 0 I0922 01:24:13.033657 19 dump.go:128] kube-system/kube-controller-manager-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033675 19 dump.go:134] Container kube-controller-manager ready: true, restart count 0 I0922 01:24:13.033694 19 dump.go:128] kube-system/kube-scheduler-latest-control-plane started at 2025-08-14 10:01:15 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033712 19 dump.go:134] Container kube-scheduler ready: true, restart count 0 I0922 01:24:13.033734 19 dump.go:128] kube-system/kindnet-qc9kz started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033751 19 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.033771 19 dump.go:128] kube-system/coredns-674b8bbfcf-klpkw started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033803 19 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.033822 19 dump.go:128] local-path-storage/local-path-provisioner-7dc846544d-kfzml started at 
2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033839 19 dump.go:134] Container local-path-provisioner ready: true, restart count 0 I0922 01:24:13.033856 19 dump.go:128] kube-system/kube-proxy-pmhrl started at 2025-08-14 10:01:20 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033878 19 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.033894 19 dump.go:128] kube-system/create-loop-devs-mbrb2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033911 19 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.033930 19 dump.go:128] kube-system/coredns-674b8bbfcf-2h8qn started at 2025-08-14 10:01:35 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.033949 19 dump.go:134] Container coredns ready: true, restart count 0 I0922 01:24:13.106446 19 kubelet_metrics.go:206] Latency metrics for node latest-control-plane I0922 01:24:13.106489 19 dump.go:109] Logging node info for node latest-worker I0922 01:24:13.111519 19 dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker 8c88dec8-b208-4951-9edb-9daf6e60cfed 5988936 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-15 02:53:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:23:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki 
BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:23:10 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7df88837db5436386878b8262a14e32,SystemUUID:7f723078-9e20-4c29-be13-6b3e065907c6,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:c0872aae4791ff427e6eda52769afa04f17b5cf756f8267e0d52774c99d5c9de docker.io/library/docker:dind],SizeBytes:145422022,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e 
registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f 
registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:9d12cad5c4a1a4c1a853947ae4cbf31a860f98bb450173fee644d8e63ce6ea4d registry.k8s.io/build-image/distroless-iptables:v0.7.7],SizeBytes:11558416,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20241212-8ac705d0],SizeBytes:3084671,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 01:24:13.111570 19 dump.go:116] Logging kubelet events for node latest-worker I0922 01:24:13.114676 19 dump.go:121] Logging pods the kubelet thinks are on node latest-worker I0922 01:24:13.130438 19 dump.go:128] dra-3118/dra-test-driver-2r9tq started at 2025-09-22 00:27:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130475 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.130497 19 dump.go:128] dra-3361/dra-test-driver-vwd92 started at 2025-09-22 00:24:56 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130515 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.130536 19 dump.go:128] dra-6036/dra-test-driver-khrc2 started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130554 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.130573 19 dump.go:128] dra-1198/dra-test-driver-lvq9b started at 2025-09-22 00:24:19 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130590 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.130609 19 dump.go:128] dra-9101/dra-test-driver-drd6r started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130625 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.130662 19 dump.go:128] kube-system/kube-proxy-plcq4 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130684 19 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.130700 19 dump.go:128] kube-system/kindnet-kcb5w started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130720 19 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.130739 19 dump.go:128] dra-2483/dra-test-driver-9mlzl started at 2025-09-22 00:28:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130757 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.130776 19 dump.go:128] kube-system/create-loop-devs-76ng2 started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130793 19 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.130813 19 dump.go:128] dra-3384/dra-test-driver-b2v6n started at 2025-09-22 00:24:13 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.130829 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.893709 19 kubelet_metrics.go:206] Latency metrics for node latest-worker I0922 01:24:13.893772 19 dump.go:109] Logging node info for node latest-worker2 I0922 01:24:13.898663 19 
dump.go:114] Node Info: &Node{ObjectMeta:{latest-worker2 3d1774a3-1fed-4ae4-9649-579167a82d96 5988882 0 2025-08-14 10:01:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:latest-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2025-08-14 10:01:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2025-09-08 02:54:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2025-09-22 01:22:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:runtimeHandlers":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67398062080 0} {} 65818420Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2025-09-22 01:22:36 +0000 UTC,LastTransitionTime:2025-08-14 10:01:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:53b074234c804259ae0ff2cba6a6f050,SystemUUID:0c517cb0-5dd6-4327-a0bd-07b70f744e34,BootID:505e7ca9-c9d0-4e24-a4f5-b43989aac371,KernelVersion:5.15.0-138-generic,OSImage:Debian GNU/Linux 12 (bookworm),ContainerRuntimeVersion:containerd://2.1.1,KubeletVersion:v1.33.1,KubeProxyVersion:,OperatingSystem:linux,Architecture:amd64,Swap:&NodeSwapStatus{Capacity:*1023406080,},},Images:[]ContainerImage{ContainerImage{Names:[docker.io/lfncnti/cluster-tools@sha256:906f9b2c15b6f64d83c52bf9efd108b434d311fb548ac20abcfb815981e741b6 docker.io/lfncnti/cluster-tools:v1.0.8],SizeBytes:466650686,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/go-runner@sha256:4664e5d3eba9f9f805900045fc632dd4f35d96b9a744800c0eade4fde45de681 litmuschaos.docker.scarf.sh/litmuschaos/go-runner:3.6.0],SizeBytes:190137665,},ContainerImage{Names:[docker.io/library/docker@sha256:831644212c5bdd0b3362b5855c87b980ea39a83c9e9adcef2ce03eced99a737a docker.io/library/docker:dind],SizeBytes:148417865,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.33.1 registry.k8s.io/kube-apiserver:v1.33.1],SizeBytes:102853105,},ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.33.1 registry.k8s.io/kube-proxy:v1.33.1],SizeBytes:99143369,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.33.1 registry.k8s.io/kube-controller-manager:v1.33.1],SizeBytes:95652183,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8],SizeBytes:91036984,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.33.1 registry.k8s.io/kube-scheduler:v1.33.1],SizeBytes:74496343,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 
registry.k8s.io/etcd:3.6.4-0],SizeBytes:74311308,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d079de1fa7f757b367c39ee12eafc0e8729e94962df2c29808f4b602a615c142 docker.io/aquasec/kube-bench:latest],SizeBytes:62003820,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:e68c8040406b92b9df6566bd140523da57e12d5bf0a193ba985526050ddb4e87],SizeBytes:60365666,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.21-0],SizeBytes:58938593,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:352a050380078cb2a1c246357a0dfa2fcf243ee416b92ff28b44a01d1b4b0294 registry.k8s.io/e2e-test-images/agnhost:2.56],SizeBytes:55898458,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:60c11972feffb389c42762ab7ba596dcbfe88aed032b9c4a9cb9adedaaf8a35c docker.io/openpolicyagent/gatekeeper:v3.20.0],SizeBytes:46856023,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper@sha256:6fda2898bdbdb9d446ce71b1f6bfe0efad87076025989642b5d151c17fd9a77c docker.io/openpolicyagent/gatekeeper:v3.20.1],SizeBytes:44884438,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20250512-df8de77b],SizeBytes:44375501,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:19d4ecf1e0731b9ea55aca9c070d520f68b96ed0defbcc0e4eefe97b3d663ca3 registry.k8s.io/e2e-test-images/sample-apiserver:1.29.2],SizeBytes:39672997,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:885520da54150bd9d133b30d59df645fdc5a06644809d987e1d3c707c3dd2aaf docker.io/aquasec/kube-hunter:latest],SizeBytes:38278209,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:2a0b297cc7c4cd376ac7413df339ff2fdaa1ec9d099aed92b5ea1f031ef7f639 registry.k8s.io/sig-storage/csi-resizer:v1.13.1],SizeBytes:32400950,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:a399393ff5bd156277c56bae0c08389b1a1b95b7fd6ea44a316ce55e0dd559d7 registry.k8s.io/sig-storage/csi-attacher:v4.8.0],SizeBytes:32231444,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:672e45d6a55678abc1d102de665b5cbd63848e75dc7896f238c8eaaf3c7d322f registry.k8s.io/sig-storage/csi-provisioner:v5.1.0],SizeBytes:32167411,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator@sha256:f37e85afc0412a47d1e028e5f86b6cc3a8cbbfe59369beba0af5401aa47dfd63 litmuschaos.docker.scarf.sh/litmuschaos/chaos-operator:3.6.0],SizeBytes:29032757,},ContainerImage{Names:[litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner@sha256:a762f650ce60ab0698c79e4b646c941933fab46837c69a0a888addb6bda0ccb6 
litmuschaos.docker.scarf.sh/litmuschaos/chaos-runner:3.6.0],SizeBytes:27151116,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20250214-acbabc1a],SizeBytes:22540870,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.12.0],SizeBytes:20939036,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:65192845eeac8e24a708f865911776a0d8490ab886e78dad17e80ae1870e7410 registry.k8s.io/sig-storage/hostpathplugin:v1.16.1],SizeBytes:20349620,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:011fadc314224fa0dccb59b3770ae3f355c31ae512bbcbb3ca4a362c7d616622 docker.io/openpolicyagent/gatekeeper-crds:v3.20.1],SizeBytes:18120390,},ContainerImage{Names:[docker.io/openpolicyagent/gatekeeper-crds@sha256:d60ccbbfb53422fd0af71486ed5171343f741e9e7ef9fa4dea86719e684d5e54 docker.io/openpolicyagent/gatekeeper-crds:v3.20.0],SizeBytes:17993290,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:f6602cc2b5a2ff2138db48546d727e72599cd14021cdc5309142f407f5d43969 quay.io/coreos/etcd:v3.2.32],SizeBytes:16250306,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 registry.k8s.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:d7138bcc3aa5f267403d45ad4292c95397e421ea17a0035888850f424c7de25d registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.13.0],SizeBytes:14781503,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 docker.io/coredns/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/curlimages/curl@sha256:3dfa70a646c5d03ddf0e7c0ff518a5661e95b8bcbc82079f0fb7453a96eaae35 docker.io/curlimages/curl:8.12.0],SizeBytes:10048754,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,RuntimeHandlers:[]NodeRuntimeHandler{NodeRuntimeHandler{Name:test-handler,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:runc,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},NodeRuntimeHandler{Name:,Features:&NodeRuntimeHandlerFeatures{RecursiveReadOnlyMounts:*true,UserNamespaces:*true,},},},Features:&NodeFeatures{SupplementalGroupsPolicy:*true,},},} I0922 
01:24:13.898704 19 dump.go:116] Logging kubelet events for node latest-worker2 I0922 01:24:13.902230 19 dump.go:121] Logging pods the kubelet thinks are on node latest-worker2 I0922 01:24:13.913761 19 dump.go:128] kube-system/kindnet-pthkx started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.913790 19 dump.go:134] Container kindnet-cni ready: true, restart count 0 I0922 01:24:13.913817 19 dump.go:128] dra-9101/dra-test-driver-8pr5d started at 2025-09-22 00:26:51 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.913836 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.913848 19 dump.go:128] kube-system/kube-proxy-splj6 started at 2025-08-14 10:01:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.913860 19 dump.go:134] Container kube-proxy ready: true, restart count 0 I0922 01:24:13.913872 19 dump.go:128] dra-9212/dra-test-driver-stcxz started at 2025-09-22 00:29:24 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.913882 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.913895 19 dump.go:128] dra-6036/dra-test-driver-hj5qv started at 2025-09-22 00:26:02 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.913906 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:13.913916 19 dump.go:128] kube-system/create-loop-devs-pw6dk started at 2025-08-14 10:01:25 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.913928 19 dump.go:134] Container loopdev ready: true, restart count 0 I0922 01:24:13.913942 19 dump.go:128] dra-9088/dra-test-driver-lpp9v started at 2025-09-22 00:25:36 +0000 UTC (0+1 container statuses recorded) I0922 01:24:13.913972 19 dump.go:134] Container pause ready: true, restart count 0 I0922 01:24:14.557589 19 kubelet_metrics.go:206] Latency metrics for node latest-worker2 STEP: Destroying namespace "dra-3118" for this suite. 
@ 09/22/25 01:24:14.558 << Timeline [TIMEDOUT] A suite timeout occurred In [BeforeEach] at: k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 @ 09/22/25 01:24:12.763 This is the Progress Report generated when the suite timeout occurred: [sig-node] [DRA] control plane [ConformanceCandidate] supports external claim referenced by multiple pods (Spec Runtime: 56m16s) k8s.io/kubernetes/test/e2e/dra/dra.go:869 In [BeforeEach] (Node Runtime: 56m15.922s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 At [By Step] deploying driver dra-3118.k8s.io on nodes [latest-worker] (Step Runtime: 56m15.921s) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:397 Spec Goroutine goroutine 9035 [select] k8s.io/dynamic-resource-allocation/resourceslice.(*Controller).initInformer(0xc0064b6900, {0x5a48ba8, 0xc000b96000}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:537 k8s.io/dynamic-resource-allocation/resourceslice.newController({0x5a48b70?, 0xc006be4e70?}, {{0xc002c27bf0, 0xf}, {0x5aa1d18, 0xc0059d9500}, 0xc000bd4780, 0xc0004b3d90, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:419 k8s.io/dynamic-resource-allocation/resourceslice.StartController({0x5a48b70, 0xc006be4e70}, {{0xc002c27bf0, 0xf}, {0x5aa1d18, 0xc0059d9500}, 0xc000bd4780, 0xc0004b3d90, {0x0, 0x0}, ...}) k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go:179 k8s.io/dynamic-resource-allocation/kubeletplugin.(*Helper).PublishResources(0xc0063f5790, {0xc006be4960?, 0x5a3c470?}, {0xc006bd4480?}) k8s.io/dynamic-resource-allocation/kubeletplugin/draplugin.go:773 > k8s.io/kubernetes/test/e2e/dra/test-driver/app.StartPlugin({0x5a48b70, 0xc006be4960}, {0x51bc291, 0x4}, {0xc002c27bf0, 0xf}, {0x5aa1d18, 0xc0059d9500}, {0xc002deda10, 0xd}, ...) k8s.io/kubernetes/test/e2e/dra/test-driver/app/kubeletplugin.go:223 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).SetUp(0xc0059a4dc0, 0xc005b41130, 0xc00696f200) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:615 > k8s.io/kubernetes/test/e2e/dra/utils.(*Driver).Run(0xc0059a4dc0, 0xc00694c000?, 0xc001765d50?) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:323 > k8s.io/kubernetes/test/e2e/dra/utils.NewDriver.func1() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:292 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x5a48a88?, 0xc00687f8f0?}) github.com/onsi/ginkgo/v2@v2.21.0/internal/node.go:472 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:894 github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 7499 github.com/onsi/ginkgo/v2@v2.21.0/internal/suite.go:881 Goroutines of Interest goroutine 8957 [sync.Cond.Wait, 57 minutes] sync.runtime_notifyListWait(0xc006b9a3e8, 0x0) runtime/sema.go:597 sync.(*Cond).Wait(0x51dbe59?) sync/cond.go:71 k8s.io/client-go/tools/cache.(*RealFIFO).Pop(0xc006b9a3c0, 0xc0061f89b0) k8s.io/client-go/tools/cache/the_real_fifo.go:207 k8s.io/client-go/tools/cache.(*controller).processLoop(0xc0035233f0, {0x5a48dd8, 0xc006b9e1c0}) k8s.io/client-go/tools/cache/controller.go:211 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x5a48dd8?, 0xc006b9e1c0?}, 0xc006b93680?) 
k8s.io/apimachinery/pkg/util/wait/backoff.go:255 k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x5a48dd8, 0xc006b9e1c0}, 0xc000185db8, {0x5a06ae0, 0xc006b93680}, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:256 k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext({0x5a48dd8, 0xc006b9e1c0}, 0xc000185db8, 0x3b9aca00, 0x0, 0x1) k8s.io/apimachinery/pkg/util/wait/backoff.go:223 k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...) k8s.io/apimachinery/pkg/util/wait/backoff.go:172 k8s.io/client-go/tools/cache.(*controller).RunWithContext(0xc0035233f0, {0x5a48dd8, 0xc006b9e1c0}) k8s.io/client-go/tools/cache/controller.go:183 k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext(0xc0068e8dc0, {0x5a48dd8, 0xc006b9e1c0}) k8s.io/client-go/tools/cache/shared_informer.go:587 k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc000aa8500?, 0x10000c0069a8d90?) k8s.io/client-go/tools/cache/shared_informer.go:526 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init.func7() k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:213 > k8s.io/kubernetes/test/e2e/dra/utils.(*Nodes).init in goroutine 9011 k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:211 goroutine 9025 [chan receive, 57 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc0010015a8) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc005b03a00, {0x5a3c4a0, 0xc0010015a8}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 9035 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 goroutine 9058 [chan receive, 57 minutes] > k8s.io/kubernetes/test/e2e/dra/utils.(*nullListener).Accept(0xc001001620) k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:923 google.golang.org/grpc.(*Server).Serve(0xc005b03c00, {0x5a3c4a0, 0xc001001620}) google.golang.org/grpc@v1.72.1/server.go:890 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer.func1() k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:82 k8s.io/dynamic-resource-allocation/kubeletplugin.startGRPCServer in goroutine 9035 k8s.io/dynamic-resource-allocation/kubeletplugin/nonblockinggrpcserver.go:80 There were additional failures detected. 
To view them in detail run ginkgo -vv ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS Summarizing 15 Failures: [TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports external claim referenced by multiple containers of multiple pods [sig-node, DRA, ConformanceCandidate] k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287 [TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] runs a pod without a generated resource claim [sig-node, DRA, ConformanceCandidate] 
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
[TIMEDOUT] [sig-node] [DRA] ResourceSlice Controller [It] creates slices [ConformanceCandidate] [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/dra.go:2084
[TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports simple pod referencing inline resource claim [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
[FAIL] [sig-node] Pod InPlace Resize Container [FeatureGate:InPlacePodVerticalScaling] [Beta] [It] Burstable QoS pod with memory requests + limits - decrease memory limit [sig-node, FeatureGate:InPlacePodVerticalScaling, Beta]
k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:1079
[TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] with different ResourceSlices [BeforeEach] keeps pod pending because of CEL runtime errors [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
[FAIL] [sig-node] Pods Extended (pod generation) [Feature:PodObservedGenerationTracking] [FeatureGate:PodObservedGenerationTracking] [Beta] Pod Generation [It] pod rejected by kubelet should have updated generation and observedGeneration [sig-node, Feature:PodObservedGenerationTracking, FeatureGate:PodObservedGenerationTracking, Beta]
k8s.io/kubernetes/test/e2e/node/pods.go:648
[TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports external claim referenced by multiple pods [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
[TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports reusing resources [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
[FAIL] [sig-node] Pod InPlace Resize Container [FeatureGate:InPlacePodVerticalScaling] [Beta] [It] decrease memory limit below usage [sig-node, FeatureGate:InPlacePodVerticalScaling, Beta]
k8s.io/kubernetes/test/e2e/common/node/pod_resize.go:1316
[FAIL] [sig-node] Downward API [Feature:PodLevelResources] [FeatureGate:PodLevelResources] [Beta] Downward API tests for pod level resources [It] should provide default limits.cpu/memory from pod level resources or node allocatable [sig-node, Feature:PodLevelResources, FeatureGate:PodLevelResources, Beta]
k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:572
[TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] retries pod scheduling after creating device class [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
[TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] with node-local resources [BeforeEach] uses all resources [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287
[FAIL] [sig-node] [DRA] control plane [BeforeEach] supports count/resourceclaims.resource.k8s.io ResourceQuota [ConformanceCandidate] [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:449
[TIMEDOUT] [sig-node] [DRA] control plane [ConformanceCandidate] [BeforeEach] supports inline claim referenced by multiple containers [sig-node, DRA, ConformanceCandidate]
k8s.io/kubernetes/test/e2e/dra/utils/deploy.go:287

Ran 67 of 7132 Specs in 3602.276 seconds
FAIL! - Suite Timeout Elapsed -- 52 Passed | 15 Failed | 0 Pending | 7065 Skipped

Ginkgo ran 1 suite in 1h0m4.291612962s
Test Suite Failed
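Note on the DRA timeouts: every [TIMEDOUT] entry above points at deploy.go:287, and the progress report shows the spec goroutine parked inside resourceslice.(*Controller).initInformer for the full 56-minute run, i.e. the DRA test driver never finishes publishing ResourceSlices. The node dumps show v1.33.1 control-plane images, so one plausible cause is that the API server does not serve the resource.k8s.io version the test binary expects. The following client-go sketch (file name, kubeconfig path, and the diagnosis itself are assumptions, not part of the suite) lists which resource.k8s.io versions the cluster actually advertises:

// checkdra.go: hypothetical helper, not part of the e2e suite.
// It asks the API server which versions of the resource.k8s.io
// group (ResourceSlices/ResourceClaims) are served via discovery.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the same kubeconfig the run used: $HOME/.kube/config.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "create discovery client:", err)
		os.Exit(1)
	}
	// ServerGroups returns the API groups the server advertises.
	groups, err := dc.ServerGroups()
	if err != nil {
		fmt.Fprintln(os.Stderr, "list API groups:", err)
		os.Exit(1)
	}
	for _, g := range groups.Groups {
		if g.Name != "resource.k8s.io" {
			continue
		}
		for _, v := range g.Versions {
			fmt.Println("served:", v.GroupVersion)
		}
		return
	}
	fmt.Println("resource.k8s.io is not served by this API server")
}

If kubectl is available, the equivalent one-off check is kubectl get --raw /apis/resource.k8s.io; a response that lacks the version the suite expects would be consistent with the driver deployments hanging in BeforeEach at deploy.go:287.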