2025-09-22 02:47:14,178 - xtesting.ci.run_tests - INFO - Deployment description:
+-------------------------+------------------------------------------------------------+
| ENV VAR                 | VALUE                                                      |
+-------------------------+------------------------------------------------------------+
| CI_LOOP                 | daily                                                      |
| DEBUG                   | false                                                      |
| DEPLOY_SCENARIO         | k8-nosdn-nofeature-noha                                    |
| INSTALLER_TYPE          | unknown                                                    |
| BUILD_TAG               | 05VUTVCDGOZI                                               |
| NODE_NAME               | latest                                                     |
| TEST_DB_URL             | http://testresults.opnfv.org/test/api/v1/results           |
| TEST_DB_EXT_URL         | http://testresults.opnfv.org/test/api/v1/results           |
| S3_ENDPOINT_URL         | https://storage.googleapis.com                             |
| S3_DST_URL              | s3://artifacts.opnfv.org/functest-                         |
|                         | kubernetes/05VUTVCDGOZI/functest-kubernetes-opnfv-         |
|                         | functest-kubernetes-cnf-latest-cnf_testsuite-              |
|                         | run-20                                                     |
| HTTP_DST_URL            | http://artifacts.opnfv.org/functest-                       |
|                         | kubernetes/05VUTVCDGOZI/functest-kubernetes-opnfv-         |
|                         | functest-kubernetes-cnf-latest-cnf_testsuite-              |
|                         | run-20                                                     |
+-------------------------+------------------------------------------------------------+
2025-09-22 02:47:14,188 - xtesting.ci.run_tests - INFO - Loading test case 'cnf_testsuite'...
2025-09-22 02:47:14,523 - xtesting.ci.run_tests - INFO - Running test case 'cnf_testsuite'...
2025-09-22 02:47:25,095 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite setup -l debug
CNF TestSuite version: v1.4.5-beta2
Successfully created directories for cnf-testsuite
[2025-09-22 02:47:14] INFO -- CNTI: VERSION: v1.4.5-beta2
[2025-09-22 02:47:14] INFO -- CNTI-Setup.cnf_directory_setup: Creating directories for CNTI testsuite
[2025-09-22 02:47:14] DEBUG -- CNTI: helm_local_install
[2025-09-22 02:47:14] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"}
[2025-09-22 02:47:14] INFO -- CNTI: Globally installed helm satisfies required version. Skipping local helm install.
Global helm found. Version: v3.17.0
[2025-09-22 02:47:14] DEBUG -- CNTI: helm_v2?:
[2025-09-22 02:47:14] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"}
[2025-09-22 02:47:14] DEBUG -- CNTI-Helm.helm_local_response.cmd: command: /home/xtesting/.cnf-testsuite/tools/helm/linux-amd64/helm version
No Local helm version found
[2025-09-22 02:47:14] WARN -- CNTI-Helm.helm_local_response.cmd: stderr: sh: line 0: /home/xtesting/.cnf-testsuite/tools/helm/linux-amd64/helm: not found
[2025-09-22 02:47:14] DEBUG -- CNTI: helm_v2?:
[2025-09-22 02:47:14] DEBUG -- CNTI: helm_v3?:
[2025-09-22 02:47:14] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"}
[2025-09-22 02:47:14] DEBUG -- CNTI-Helm.helm_gives_k8s_warning?.cmd: command: helm list
Global kubectl found. Version: 1.35+
Global kubectl client is more than 1 minor version ahead/behind server version
No Local kubectl version found
Global git found. Version: 2.47.3
No Local git version found
All prerequisites found.
KUBECONFIG is set as /home/xtesting/.kube/config.
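The prerequisite checks above only verify globally installed tools, so they can be reproduced by hand before a run. A minimal sketch, assuming cnf-testsuite, helm, kubectl and git are already on PATH:

  export KUBECONFIG=$HOME/.kube/config  # the suite reads this same variable
  helm version                          # must report a v3 BuildInfo (v3.17.0 here)
  kubectl version                       # client/server minor-version skew only warns
  git version
  cnf-testsuite setup -l debug          # populates ~/.cnf-testsuite/tools as logged above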
[2025-09-22 02:47:14] INFO -- CNTI-Setup.create_namespace: Creating namespace for CNTI testsuite
[2025-09-22 02:47:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes
[2025-09-22 02:47:15] INFO -- CNTI-KubectlClient.Apply.namespace: Apply namespace: cnf-testsuite
[2025-09-22 02:47:15] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-testsuite with pod-security.kubernetes.io/enforce=privileged
[2025-09-22 02:47:15] INFO -- CNTI-Setup.configuration_file_setup: Creating configuration file
[2025-09-22 02:47:15] INFO -- CNTI-Setup.install_apisnoop: Installing APISnoop tool
[2025-09-22 02:47:15] INFO -- CNTI: GitClient.clone command: https://github.com/cncf/apisnoop /home/xtesting/.cnf-testsuite/tools/apisnoop
[2025-09-22 02:47:22] INFO -- CNTI: GitClient.clone output:
[2025-09-22 02:47:22] INFO -- CNTI: GitClient.clone stderr: Cloning into '/home/xtesting/.cnf-testsuite/tools/apisnoop'...
[2025-09-22 02:47:22] INFO -- CNTI: url: https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.14/sonobuoy_0.56.14_linux_amd64.tar.gz
[2025-09-22 02:47:22] INFO -- CNTI: write_file: /home/xtesting/.cnf-testsuite/tools/sonobuoy/sonobuoy.tar.gz
[2025-09-22 02:47:22] DEBUG -- CNTI-http.client: Performing request
[2025-09-22 02:47:23] DEBUG -- CNTI-http.client: Performing request
[2025-09-22 02:47:24] DEBUG -- CNTI: Sonobuoy Version: v0.56.14
MinimumKubeVersion: 1.17.0
MaximumKubeVersion: 1.99.99
GitSHA: bd5465d6b2b2b92b517f4c6074008d22338ff509
GoVersion: go1.19.4
Platform: linux/amd64
API Version check skipped due to missing `--kubeconfig` or other error
[2025-09-22 02:47:24] INFO -- CNTI: install_kind
[2025-09-22 02:47:24] INFO -- CNTI: write_file: /home/xtesting/.cnf-testsuite/tools/kind/kind
[2025-09-22 02:47:24] INFO -- CNTI: install kind
[2025-09-22 02:47:24] INFO -- CNTI: url: https://github.com/kubernetes-sigs/kind/releases/download/v0.27.0/kind-linux-amd64
[2025-09-22 02:47:24] DEBUG -- CNTI-http.client: Performing request
[2025-09-22 02:47:24] DEBUG -- CNTI-http.client: Performing request
Dependency installation complete
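The dependency step above fetches pinned tool releases over HTTPS. A rough manual equivalent using the exact URLs from the log; the destination layout mirrors the suite's, and curl is an assumption (the suite uses its own HTTP client):

  mkdir -p ~/.cnf-testsuite/tools/sonobuoy ~/.cnf-testsuite/tools/kind
  git clone https://github.com/cncf/apisnoop ~/.cnf-testsuite/tools/apisnoop
  curl -fL -o ~/.cnf-testsuite/tools/sonobuoy/sonobuoy.tar.gz \
    https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.14/sonobuoy_0.56.14_linux_amd64.tar.gz
  curl -fL -o ~/.cnf-testsuite/tools/kind/kind \
    https://github.com/kubernetes-sigs/kind/releases/download/v0.27.0/kind-linux-amd64
  chmod +x ~/.cnf-testsuite/tools/kind/kind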
2025-09-22 02:47:45,510 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cnf_install cnf-config=example-cnfs/coredns/cnf-testsuite.yml -l debug
Successfully created directories for cnf-testsuite
[2025-09-22 02:47:25] INFO -- CNTI-Setup.cnf_directory_setup: Creating directories for CNTI testsuite
[2025-09-22 02:47:25] DEBUG -- CNTI: helm_local_install
KUBECONFIG is set as /home/xtesting/.kube/config.
[2025-09-22 02:47:25] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"}
[2025-09-22 02:47:25] INFO -- CNTI: Globally installed helm satisfies required version. Skipping local helm install.
[2025-09-22 02:47:25] INFO -- CNTI-Setup.create_namespace: Creating namespace for CNTI testsuite
[2025-09-22 02:47:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes
[2025-09-22 02:47:25] INFO -- CNTI-KubectlClient.Apply.namespace: Apply namespace: cnf-testsuite
[2025-09-22 02:47:25] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-testsuite with pod-security.kubernetes.io/enforce=privileged
[2025-09-22 02:47:25] INFO -- CNTI-Setup.cnf_install: Installing CNF to cluster
[2025-09-22 02:47:25] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:47:25] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:47:25] DEBUG -- CNTI: find output:
[2025-09-22 02:47:25] WARN -- CNTI: find stderr: find: installed_cnf_files/*: No such file or directory
[2025-09-22 02:47:25] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: []
[2025-09-22 02:47:25] INFO -- CNTI: ClusterTools install
[2025-09-22 02:47:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces
[2025-09-22 02:47:25] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"cnf-testsuite\"}}\n"}, "creationTimestamp" => "2025-09-22T02:47:15Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "6034230", "uid" => "a47765ed-7349-4df3-a8f5-e815875705b3"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "20", "uid" => "854ae2dc-15f4-420c-8f58-c250a8f7b1c3"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-09-22T02:39:52Z", "deletionTimestamp" => "2025-09-22T02:47:09Z", "generateName" => "ims-", "labels" => {"kubernetes.io/metadata.name" => "ims-stc5z", "pod-security.kubernetes.io/enforce" => "baseline"}, "name" => "ims-stc5z", "resourceVersion" => "6034226", "uid" => "165df324-b70e-4540-b1b5-f6c28cb983cd"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"conditions" => [{"lastTransitionTime" => "2025-09-22T02:47:14Z", "message" => "All resources successfully discovered", "reason" => "ResourcesDiscovered", "status" => "False", "type" => "NamespaceDeletionDiscoveryFailure"}, {"lastTransitionTime" => "2025-09-22T02:47:14Z", "message" => "All legacy kube types successfully parsed", "reason" => "ParsedGroupVersions", "status" => "False", "type" => "NamespaceDeletionGroupVersionParsingFailure"}, {"lastTransitionTime" => "2025-09-22T02:47:14Z", "message" => "All content successfully deleted, may be waiting on finalization", "reason" => "ContentDeleted", "status" => "False", "type" => "NamespaceDeletionContentFailure"}, {"lastTransitionTime" => "2025-09-22T02:47:14Z", "message" => "All content successfully removed", "reason" => "ContentRemoved", "status" => "False", "type" => "NamespaceContentRemaining"}, {"lastTransitionTime" => "2025-09-22T02:47:14Z", "message" => "All content-preserving finalizers finished", "reason" => "ContentHasNoFinalizers", "status" => "False", "type" => "NamespaceFinalizersRemaining"}], "phase" => "Terminating"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "27", "uid" => "c1130257-aaa7-40b1-8a9a-d4cb9de8c5b9"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "12", "uid" => "eb7f560d-60ea-4c44-8abd-afb9b8a4f197"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "4", "uid" => "775d0b7a-a3fe-4870-91da-260ed7d8a71e"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-08-14T10:01:17Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "291", "uid" => "4772814a-342a-45ba-8647-3b9bbce45548"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}]
[2025-09-22 02:47:25] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file cluster_tools.yml
[2025-09-22 02:47:26] WARN -- CNTI-KubectlClient.Apply.file.cmd: stderr: Warning: would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), privileged (container "cluster-tools" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "cluster-tools" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-tools" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "proc", "systemd", "hostfs" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "cluster-tools" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-tools" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-09-22 02:47:26] INFO -- CNTI: ClusterTools wait_for_cluster_tools
[2025-09-22 02:47:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces
[2025-09-22 02:47:26] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: (identical to the array logged at 02:47:25 above)
[2025-09-22 02:47:26] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Daemonset/cluster-tools to install
[2025-09-22 02:47:26] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-09-22 02:47:26] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-09-22 02:47:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-09-22 02:47:26] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 0
[2025-09-22 02:47:27] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-09-22 02:47:27] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-09-22 02:47:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-09-22 02:47:28] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-09-22 02:47:28] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-09-22 02:47:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
ClusterTools installed
CNF installation start.
Installing deployment "coredns".
[2025-09-22 02:47:28] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Daemonset/cluster-tools is ready
[2025-09-22 02:47:28] DEBUG -- CNTI-CNFInstall.parsed_cli_args: Parsed args: {config_path: "example-cnfs/coredns/cnf-testsuite.yml", timeout: 1800, skip_wait_for_install: false}
[2025-09-22 02:47:28] INFO -- CNTI-Helm.helm_repo_add: Adding helm repository: stable
[2025-09-22 02:47:28] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"}
[2025-09-22 02:47:28] DEBUG -- CNTI-Helm.helm_repo_add.cmd: command: helm repo add stable https://cncf.gitlab.io/stable
[2025-09-22 02:47:29] INFO -- CNTI-Helm.pull: Pulling helm chart: stable/coredns
[2025-09-22 02:47:29] DEBUG -- CNTI-Helm.pull.cmd: command: helm pull stable/coredns --untar --destination installed_cnf_files/deployments/coredns
[2025-09-22 02:47:29] INFO -- CNTI-CNFManager.ensure_namespace_exists!: Ensure that namespace: cnf-default exists on the cluster for the CNF install
[2025-09-22 02:47:29] INFO -- CNTI-KubectlClient.Apply.namespace: Apply namespace: cnf-default
[2025-09-22 02:47:29] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-default with pod-security.kubernetes.io/enforce=privileged
[2025-09-22 02:47:30] INFO -- CNTI-Helm.install: Installing helm chart: installed_cnf_files/deployments/coredns/coredns
[2025-09-22 02:47:30] DEBUG -- CNTI-Helm.install: Values:
[2025-09-22 02:47:30] DEBUG -- CNTI-Helm.install.cmd: command: helm install coredns installed_cnf_files/deployments/coredns/coredns -n cnf-default
[2025-09-22 02:47:30] WARN -- CNTI-Helm.install.cmd: stderr:
W0922 02:47:30.666517 505 warnings.go:70] spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
W0922 02:47:30.666594 505 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "coredns" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "coredns" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "coredns" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "coredns" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-09-22 02:47:30] INFO -- CNTI-Helm.generate_manifest: Generating manifest from installed CNF: coredns
[2025-09-22 02:47:30] DEBUG -- CNTI-Helm.cmd: command: helm get manifest coredns --namespace cnf-default
[2025-09-22 02:47:30] INFO -- CNTI-Helm.generate_manifest: Manifest was generated successfully
[2025-09-22 02:47:30] INFO -- CNTI-CNFInstall.add_namespace_to_resources: Updating metadata.namespace field for resources in generated manifest
Waiting for resource for "coredns" deployment (1/1): [Deployment] coredns-coredns
[2025-09-22 02:47:30] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: ConfigMap, name: coredns-coredns}
[2025-09-22 02:47:30] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: Service, name: coredns-coredns}
[2025-09-22 02:47:30] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: Deployment, name: coredns-coredns}
[2025-09-22 02:47:30] DEBUG -- CNTI-CNFInstall.add_manifest_to_file: coredns manifest was appended into installed_cnf_files/deployments/coredns/deployment_manifest.yml file
[2025-09-22 02:47:30] DEBUG -- CNTI-CNFInstall.add_manifest_to_file: coredns manifest was appended into installed_cnf_files/common_manifest.yml file
[2025-09-22 02:47:30] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "ConfigMap", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "ClusterRole", name: "coredns-coredns", namespace: "default"}, {kind: "ClusterRoleBinding", name: "coredns-coredns", namespace: "default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}]
[2025-09-22 02:47:30] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Deployment/coredns-coredns to install
[2025-09-22 02:47:30] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:30] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:31] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 0
[2025-09-22 02:47:32] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:32] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:33] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:33] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:33] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:34] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:34] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:34] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:35] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:35] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:35] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:36] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:36] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:37] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:37] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:37] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:38] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:38] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:38] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:39] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:39] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:39] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:40] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:40] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:42] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:42] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:42] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:42] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 10
[2025-09-22 02:47:43] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:43] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:43] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:44] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:44] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:47:45] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-09-22 02:47:45] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-09-22 02:47:45] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
All "coredns" deployment resources are up.
CNF installation complete.
[2025-09-22 02:47:45] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Deployment/coredns-coredns is ready
[2025-09-22 02:47:45] INFO -- CNTI-Setup.cnf_install: CNF installed successfully
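Stripped of the polling, the install that just completed reduces to the helm commands captured in the log; waiting on the Deployment can be approximated with kubectl rollout status, with the 1800s budget matching the parsed timeout above:

  helm repo add stable https://cncf.gitlab.io/stable
  helm pull stable/coredns --untar --destination installed_cnf_files/deployments/coredns
  helm install coredns installed_cnf_files/deployments/coredns/coredns -n cnf-default
  helm get manifest coredns --namespace cnf-default  # what the suite renders into common_manifest.yml
  kubectl rollout status deployment/coredns-coredns -n cnf-default --timeout=1800s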
2025-09-22 02:52:37,322 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cert -l debug
CNF TestSuite version: v1.4.5-beta2
Compatibility, Installability & Upgradability Tests
[2025-09-22 02:47:45] INFO -- CNTI: VERSION: v1.4.5-beta2
[2025-09-22 02:47:45] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility
[2025-09-22 02:47:45] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-09-22 02:47:45] DEBUG -- CNTI-CNFManager.Points.Results.file: Results file created: results/cnf-testsuite-results-20250922-024745-528.yml
[2025-09-22 02:47:45] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:47:45] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:47:45] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:47:45] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:47:45] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:47:45] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:47:45] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:47:45] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:47:45] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [increase_decrease_capacity]
[2025-09-22 02:47:45] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:47:45] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:47:45] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:47:45] INFO -- CNTI-CNFManager.Task.task_runner.increase_decrease_capacity: Starting test
[2025-09-22 02:47:45] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-09-22 02:47:45] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: (same manifest list as logged above for kind: Deployment)
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: (same manifest list as logged above for kind: Deployment)
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: (same manifest list as logged above for kind: Deployment)
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-09-22 02:47:45] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: (same manifest list as logged above for kind: Deployment)
"k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:47:45] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], 
"resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:47:45] INFO -- CNTI-change_capacity:resource: Deployment/coredns-coredns; namespace: cnf-default [2025-09-22 02:47:45] INFO -- CNTI-change_capacity:capacity: Base replicas: 1; Target replicas: 3 [2025-09-22 02:47:45] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 1 replicas [2025-09-22 02:47:45] DEBUG -- CNTI: target_replica_count: 1 [2025-09-22 02:47:45] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:47:45] DEBUG -- CNTI: Deployment initialized to 1 [2025-09-22 02:47:45] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 3 replicas [2025-09-22 02:47:45] WARN -- CNTI-KubectlClient.Utils.scale.cmd: stderr: Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead [2025-09-22 02:47:45] DEBUG -- CNTI: target_replica_count: 3 [2025-09-22 02:47:46] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:47:48] DEBUG -- CNTI: Time left: 58 seconds [2025-09-22 02:47:48] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:47:50] DEBUG -- CNTI: Time left: 56 seconds [2025-09-22 02:47:50] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:47:52] DEBUG -- CNTI: Time left: 54 seconds [2025-09-22 02:47:52] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:47:54] DEBUG -- CNTI: Time left: 52 seconds [2025-09-22 02:47:54] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:47:56] DEBUG -- CNTI: Time left: 50 seconds [2025-09-22 02:47:56] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:47:58] DEBUG -- CNTI: Time left: 48 seconds [2025-09-22 02:47:58] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:48:00] DEBUG -- CNTI: Time left: 46 seconds [2025-09-22 02:48:00] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-09-22 02:48:02] DEBUG -- CNTI: Time left: 58 seconds [2025-09-22 02:48:02] DEBUG -- CNTI: current_replicas before get Deployment: 3 [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" 
=> "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", 
"image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => 
"udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", 
"k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => 
"udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", 
"k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:48:03] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], 
"resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:48:03] INFO -- CNTI-change_capacity:resource: Deployment/coredns-coredns; namespace: cnf-default [2025-09-22 02:48:03] INFO -- CNTI-change_capacity:capacity: Base replicas: 3; Target replicas: 1 [2025-09-22 02:48:03] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 3 replicas [2025-09-22 02:48:03] DEBUG -- CNTI: target_replica_count: 3 [2025-09-22 02:48:03] DEBUG -- CNTI: current_replicas before get Deployment: 3 [2025-09-22 02:48:03] DEBUG -- CNTI: Deployment initialized to 3 [2025-09-22 02:48:03] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 1 replicas [2025-09-22 02:48:03] WARN -- CNTI-KubectlClient.Utils.scale.cmd: stderr: Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead [2025-09-22 02:48:03] DEBUG -- CNTI: target_replica_count: 1 [2025-09-22 02:48:03] DEBUG -- CNTI: current_replicas before get Deployment: 1 ✔️ 🏆PASSED: [increase_decrease_capacity] Replicas increased to 3 and decreased to 1 📦📈📉 Compatibility, installability, and upgradeability results: 1 of 1 tests passed  State Tests [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'increase_decrease_capacity' emoji: 📦📈📉 [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'increase_decrease_capacity' tags: ["compatibility", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points: Task: 'increase_decrease_capacity' type: essential [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'increase_decrease_capacity' tags: ["compatibility", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points: Task: 'increase_decrease_capacity' type: essential [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.upsert_task-increase_decrease_capacity: Task start time: 2025-09-22 02:47:45 UTC, end time: 2025-09-22 02:48:03 UTC [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.upsert_task-increase_decrease_capacity: Task: 'increase_decrease_capacity' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:18.132494315 [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found 
tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["increase_decrease_capacity"] for tags: ["compatibility", "cert"] [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["compatibility", "cert"] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["compatibility", "cert"] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["increase_decrease_capacity"] for tags: ["compatibility", "cert"] [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["compatibility", "cert"] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["compatibility", "cert"] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", 
"hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["essential"] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:48:03] INFO 
-- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: 
privileged_containers is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: selinux_options is worth: 100 points [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:48:03] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-09-22 02:48:03] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1900, max tasks passed: 19 for tags: ["essential"] [2025-09-22 02:48:03] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => nil, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:48:03] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:48:03] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:48:03] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 100} [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["no_local_volume_configuration", "elastic_volumes", "database_persistence", "node_drain"] for tag: state [2025-09-22 02:48:03] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:48:03] INFO -- CNTI: install litmus [2025-09-22 02:48:03] INFO -- CNTI-KubectlClient.Apply.namespace: Apply namespace: litmus [2025-09-22 02:48:04] INFO -- CNTI-Label.namespace: command: kubectl label namespace litmus pod-security.kubernetes.io/enforce=privileged [2025-09-22 02:48:04] DEBUG -- CNTI-Label.namespace: output: namespace/litmus labeled [2025-09-22 02:48:04] INFO -- CNTI: install litmus operator [2025-09-22 02:48:04] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file https://litmuschaos.github.io/litmus/litmus-operator-v3.6.0.yaml [2025-09-22 02:48:05] WARN -- CNTI-KubectlClient.Apply.file.cmd: stderr: Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "chaos-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "chaos-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "chaos-operator" 
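The PodSecurity warning here is expected: the chaos-operator pod needs privileges that the cluster's "restricted" profile forbids, which is why the suite labels the litmus namespace for privileged enforcement before applying the operator manifest. A hedged sketch of that preparation step; the label command matches the log, while the namespace-creation call is an assumption (the log only shows "Apply namespace"):

```python
import subprocess

def ensure_privileged_namespace(namespace: str) -> None:
    """Create the namespace if needed, then label it so the PodSecurity
    admission controller enforces the 'privileged' profile there.
    Hypothetical helper, not the testsuite's implementation."""
    # assumption: a plain create is equivalent to the suite's namespace apply
    subprocess.run(["kubectl", "create", "namespace", namespace],
                   capture_output=True)  # ignore AlreadyExists
    subprocess.run(
        ["kubectl", "label", "--overwrite", "namespace", namespace,
         "pod-security.kubernetes.io/enforce=privileged"],
        check=True,
    )

# ensure_privileged_namespace("litmus")  # as done before installing the operator
```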
[2025-09-22 02:48:05] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:48:05] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:48:05] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:48:05] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:48:05] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:48:05] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:48:05] INFO -- CNTI: check_cnf_config cnf:
🎬 Testing: [node_drain]
[... the cnf_config_list lookup (find command/output, Found CNF config file) is logged again, verbatim ...]
[2025-09-22 02:48:05] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:48:05] INFO -- CNTI-CNFManager.Task.task_runner.node_drain: Starting test
[2025-09-22 02:48:05] INFO -- CNTI-CNFManager.workload_resource_test: Starting test
[2025-09-22 02:48:05] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"]
[2025-09-22 02:48:05] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources
[2025-09-22 02:48:05] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:48:05] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns
[2025-09-22 02:48:05] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns
[2025-09-22 02:48:05] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns
[2025-09-22 02:48:05] INFO -- CNTI: Current Resource Name: Deployment/coredns-coredns Namespace: cnf-default
[2025-09-22 02:48:05] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns
[2025-09-22 02:48:05] DEBUG -- CNTI-KubectlClient.Get.schedulable_nodes_list: Retrieving list of schedulable nodes
[2025-09-22 02:48:06] INFO -- CNTI-KubectlClient.Get.schedulable_nodes_list: Retrieved schedulable nodes list: latest-worker, latest-worker2
[2025-09-22 02:48:06] INFO -- CNTI: Getting the operator node name: kubectl get pods -l app.kubernetes.io/instance=coredns -n cnf-default -o=jsonpath='{.items[0].spec.nodeName}'
[2025-09-22 02:48:06] DEBUG -- CNTI: status_code: 0
[2025-09-22 02:48:06] INFO -- CNTI: Found node to cordon latest-worker using label app.kubernetes.io/instance='coredns' in cnf-default namespace.
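Node selection is a single jsonpath query against the pods carrying the CNF's instance label, exactly as logged above. A small Python sketch of that lookup (`pod_node_name` is a hypothetical name):

```python
import subprocess

def pod_node_name(label: str, namespace: str) -> str:
    """Return the node hosting the first pod matching the label: the node
    the suite will cordon and drain in the node_drain experiment."""
    return subprocess.run(
        ["kubectl", "get", "pods", "-l", label, "-n", namespace,
         "-o=jsonpath={.items[0].spec.nodeName}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

# pod_node_name("app.kubernetes.io/instance=coredns", "cnf-default")
# -> "latest-worker" in this run; litmus itself runs on latest-worker2,
#    so draining the workload node does not take down the chaos runner.
```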
[2025-09-22 02:48:06] INFO -- CNTI-KubectlClient.Utils.cordon: Cordon node latest-worker
[2025-09-22 02:48:06] INFO -- CNTI: Cordoned node latest-worker successfully.
[2025-09-22 02:48:06] DEBUG -- CNTI-node_drain: Getting the app node name kubectl get pods -l app.kubernetes.io/instance=coredns -n cnf-default -o=jsonpath='{.items[0].spec.nodeName}'
[2025-09-22 02:48:07] DEBUG -- CNTI-node_drain: status_code: 0
[2025-09-22 02:48:07] DEBUG -- CNTI-node_drain: Getting the app node name kubectl get pods -n litmus -l app.kubernetes.io/name=litmus -o=jsonpath='{.items[0].spec.nodeName}'
[2025-09-22 02:48:07] DEBUG -- CNTI-node_drain: status_code: 0
[2025-09-22 02:48:07] INFO -- CNTI: Workload Node Name: latest-worker
[2025-09-22 02:48:07] INFO -- CNTI: Litmus Node Name: latest-worker2
[2025-09-22 02:48:07] INFO -- CNTI: download_template url, filename: https://raw.githubusercontent.com/litmuschaos/chaos-charts/3.6.0/faults/kubernetes/node-drain/fault.yaml, node_drain_experiment.yaml
[2025-09-22 02:48:07] INFO -- CNTI: chaos_manifests_path
[2025-09-22 02:48:07] INFO -- CNTI: filepath: /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_experiment.yaml
[2025-09-22 02:48:07] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_experiment.yaml
[2025-09-22 02:48:07] INFO -- CNTI: download_template url, filename: https://raw.githubusercontent.com/litmuschaos/chaos-charts/2.6.0/charts/generic/node-drain/rbac.yaml, node_drain_rbac.yaml
[2025-09-22 02:48:07] INFO -- CNTI: chaos_manifests_path
[2025-09-22 02:48:07] INFO -- CNTI: filepath: /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_rbac.yaml
[2025-09-22 02:48:07] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_rbac.yaml
[2025-09-22 02:48:08] INFO -- CNTI-KubectlClient.Utils.annotate: Annotate Deployment/coredns-coredns with litmuschaos.io/chaos="true"
[2025-09-22 02:48:08] WARN -- CNTI-KubectlClient.Utils.annotate.cmd: stderr: Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "coredns" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "coredns" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "coredns" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "coredns" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-09-22 02:48:08] INFO -- CNTI-node_drain: Chaos test name: coredns-coredns-9b44d58e; Experiment name: node-drain; Label app.kubernetes.io/instance=coredns; namespace: cnf-default
[2025-09-22 02:48:08] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file installed_cnf_files/temp_files/node-drain-chaosengine.yml
[2025-09-22 02:48:08] INFO -- CNTI: wait_for_test: coredns-coredns-9b44d58e-node-drain
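The wait loop that follows polls the ChaosEngine's .status.engineStatus every two seconds against a 30-minute budget (hence "Time left: 1798 seconds"). A sketch under those assumptions; the terminal status check is illustrative, not the suite's exact exit condition:

```python
import subprocess
import time

ENGINE = "coredns-coredns-9b44d58e"

def wait_for_chaos_engine(namespace: str = "cnf-default",
                          timeout_s: int = 1800) -> str:
    """Poll the ChaosEngine status, as the log below does, until it
    leaves 'initialized' or the timeout budget is exhausted."""
    deadline = time.monotonic() + timeout_s
    status = ""
    while time.monotonic() < deadline:
        status = subprocess.run(
            ["kubectl", "get", f"chaosengine.litmuschaos.io/{ENGINE}",
             "-n", namespace, "-o", "jsonpath={.status.engineStatus}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if status not in ("", "initialized"):
            break
        time.sleep(2)  # matches the 2-second "Time left" cadence below
    return status  # e.g. "completed" once the node-drain experiment finishes
```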
[2025-09-22 02:48:08] INFO -- CNTI: status_code: 0, response: initialized
[... the suite polled the ChaosEngine every ~2 seconds, from 02:48:10 (Time left: 1798 seconds) through 02:50:55 (Time left: 1633 seconds); each iteration logged "INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-9b44d58e -n cnf-default -o 'jsonpath={.status.engineStatus}'" followed by "INFO -- CNTI: status_code: 0, response: initialized" ...]
[2025-09-22 02:50:57] DEBUG -- CNTI: Time left: 1631 seconds
[2025-09-22 02:50:57] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-9b44d58e -n cnf-default -o 'jsonpath={.status.engineStatus}'
[2025-09-22 02:50:57] INFO -- CNTI: status_code: 0, response: initialized
[2025-09-22 02:50:59] DEBUG -- CNTI: Time left: 1629 seconds
[2025-09-22 02:50:59] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-9b44d58e -n cnf-default -o 'jsonpath={.status.engineStatus}'
[2025-09-22 02:50:59] INFO -- CNTI: status_code: 0, response: completed
[2025-09-22 02:50:59] INFO -- CNTI: Getting litmus status info: kubectl get chaosresults.litmuschaos.io coredns-coredns-9b44d58e-node-drain -n cnf-default -o 'jsonpath={.status.experimentStatus.verdict}'
[2025-09-22 02:51:00] INFO -- CNTI: status_code: 0, response: Pass
[2025-09-22 02:51:00] INFO -- CNTI: Getting litmus status info: kubectl get chaosresult.litmuschaos.io coredns-coredns-9b44d58e-node-drain -n cnf-default -o 'jsonpath={.status.experimentStatus.verdict}'
[2025-09-22 02:51:00] INFO -- CNTI: status_code: 0, response: Pass
[2025-09-22 02:51:00] INFO -- CNTI-KubectlClient.Utils.uncordon: Uncordon node latest-worker
✔️ 🏆PASSED: [node_drain] node_drain chaos test passed 🗡️💀♻
State results: 1 of 1 tests passed
Security Tests
[2025-09-22 02:51:00] INFO -- CNTI: Uncordoned node latest-worker successfully.
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'node_drain' emoji: 🗡️💀♻
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'node_drain' tags: ["state", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points: Task: 'node_drain' type: essential
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'node_drain' tags: ["state", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points: Task: 'node_drain' type: essential
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.upsert_task-node_drain: Task start time: 2025-09-22 02:48:05 UTC, end time: 2025-09-22 02:51:00 UTC
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.upsert_task-node_drain: Task: 'node_drain' has status: 'passed' and is awarded: 100 points. Runtime: 00:02:55.084143622
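[Editor's note] The node_drain trace above reduces to a simple pattern: apply a ChaosEngine, poll .status.engineStatus until it reports "completed", then read the verdict off the ChaosResult. A minimal sketch of that pattern in Python, assuming kubectl access to the same cluster; the resource names come from the log, everything else is illustrative and not the suite's own code:

#!/usr/bin/env python3
"""Sketch of the Litmus polling loop visible in the log above."""
import subprocess
import time

def kubectl_jsonpath(resource: str, name: str, namespace: str, jsonpath: str) -> str:
    """Return the output of `kubectl get <resource> <name> -n <ns> -o jsonpath=<expr>`."""
    cmd = ["kubectl", "get", resource, name, "-n", namespace, "-o", f"jsonpath={jsonpath}"]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

engine = "coredns-coredns-9b44d58e"   # ChaosEngine name from the log
experiment = "node-drain"
namespace = "cnf-default"
deadline = time.time() + 1800          # the suite counts down from ~1800 seconds

# Poll the engine status every 2 s, exactly like the repeated records above.
while time.time() < deadline:
    status = kubectl_jsonpath("chaosengine.litmuschaos.io", engine, namespace,
                              "{.status.engineStatus}")
    if status == "completed":
        break
    time.sleep(2)
else:
    raise TimeoutError(f"ChaosEngine {engine} never left state {status!r}")

# The pass/fail verdict lives on the ChaosResult the experiment creates.
verdict = kubectl_jsonpath("chaosresults.litmuschaos.io", f"{engine}-{experiment}",
                           namespace, "{.status.experimentStatus.verdict}")
print(f"{experiment} verdict: {verdict}")  # the run above returned: Pass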
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["no_local_volume_configuration", "elastic_volumes", "database_persistence", "node_drain"] for tag: state
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["node_drain"] for tags: ["state", "cert"]
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["state", "cert"]
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: []
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: []
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["state", "cert"]
[... the block above, from the "state"/"cert" tasks_by_tag lookups through "Max points scored: 100", was logged a second time verbatim ...]
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: [the same 19 "cert" tasks listed above] for tag: essential
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["essential"]
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: []
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: []
[... for each of the 19 "essential" tasks (specialized_init_system through latest_tag), na_assigned? logged "NA status assigned for task: <task>", total_max_tasks_points logged "Task: <task> -> failed: false, skipped: NA: false, bonus:", and task_points logged "Task: <task> is worth: 100 points" ...]
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1900, max tasks passed: 19 for tags: ["essential"]
[2025-09-22 02:51:00] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}]}
[... update_yml logged results/parsed_new_yml twice; the final parsed_new_yml record adds "maximum_points" => 100 to the record above ...]
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", "linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: [the same 19 "cert" tasks listed above] for tag: cert
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:00] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:51:00] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:51:00] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:51:00] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:00] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [privileged_containers]
[2025-09-22 02:51:00] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Task.task_runner.privileged_containers: Starting test
[2025-09-22 02:51:00] DEBUG -- CNTI: white_list_container_names []
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.workload_resource_test: Starting test
[2025-09-22 02:51:00] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"]
"deployment", "statefulset", "pod", "daemonset"] [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-09-22 02:51:00] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-09-22 02:51:00] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:00] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-09-22 02:51:00] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:00] DEBUG -- CNTI-KubectlClient.Get.privileged_containers: Get privileged containers [2025-09-22 02:51:00] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:00] INFO -- CNTI-KubectlClient.Get.privileged_containers: Found 8 privileged containers [2025-09-22 02:51:00] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-09-22 02:51:00] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns ✔️ 🏆PASSED: [privileged_containers] No privileged containers 🔓🔑 [2025-09-22 02:51:00] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: true [2025-09-22 02:51:00] DEBUG -- CNTI: violator list: [] [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'privileged_containers' emoji: 🔓🔑 [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'privileged_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points: Task: 'privileged_containers' type: essential [2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'privileged_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points: Task: 'privileged_containers' type: essential [2025-09-22 02:51:00] DEBUG -- CNTI-CNFManager.Points.upsert_task-privileged_containers: Task start time: 2025-09-22 02:51:00 UTC, end time: 2025-09-22 02:51:00 UTC [2025-09-22 02:51:00] INFO -- CNTI-CNFManager.Points.upsert_task-privileged_containers: Task: 'privileged_containers' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.535332940 [2025-09-22 02:51:00] INFO -- CNTI-Setup.kubescape_framework_download: Downloading Kubescape testing framework [2025-09-22 02:51:00] DEBUG -- CNTI-http.client: Performing request [2025-09-22 02:51:01] DEBUG -- CNTI-http.client: Performing request [2025-09-22 02:51:01] DEBUG -- CNTI-Setup.kubescape_framework_download: Downloaded Kubescape framework json [2025-09-22 02:51:01] INFO -- CNTI-Setup.kubescape_framework_download: Kubescape framework json has been downloaded [2025-09-22 02:51:01] INFO -- CNTI-Setup.install_kubescape: Installing Kubescape tool [2025-09-22 02:51:01] DEBUG -- CNTI-http.client: Performing request [2025-09-22 02:51:01] DEBUG -- CNTI-http.client: Performing request [2025-09-22 02:51:05] DEBUG -- CNTI-Setup.install_kubescape: Downloaded Kubescape binary [2025-09-22 02:51:05] INFO -- CNTI-ShellCmd.run: command: chmod +x 
[2025-09-22 02:51:05] DEBUG -- CNTI-ShellCmd.run: output:
[2025-09-22 02:51:05] INFO -- CNTI-Setup.install_kubescape: Kubescape tool has been installed
[2025-09-22 02:51:05] INFO -- CNTI-Setup.kubescape_scan: Perform Kubescape cluster scan
[2025-09-22 02:51:05] INFO -- CNTI: scan command: /home/xtesting/.cnf-testsuite/tools/kubescape/kubescape scan framework nsa --use-from /home/xtesting/.cnf-testsuite/tools/kubescape/nsa.json --output kubescape_results.json --format json --format-version=v1 --exclude-namespaces kube-system,kube-public,kube-node-lease,local-path-storage,litmus,cnf-testsuite
[2025-09-22 02:51:09] INFO -- CNTI: output:
──────────────────────────────────────────────────
Framework scanned: NSA
┌─────────────────┬────┐
│ Controls        │ 25 │
│ Passed          │ 11 │
│ Failed          │ 9  │
│ Action Required │ 5  │
└─────────────────┴────┘
Failed resources by severity:
┌──────────┬────┐
│ Critical │ 0  │
│ High     │ 0  │
│ Medium   │ 11 │
│ Low      │ 1  │
└──────────┴────┘
Run with '--verbose'/'-v' to see control failures for each resource.
┌──────────┬────────────────────────────────────────────────────┬──────────────────┬───────────────┬────────────────────┐
│ Severity │ Control name                                       │ Failed resources │ All Resources │ Compliance score   │
├──────────┼────────────────────────────────────────────────────┼──────────────────┼───────────────┼────────────────────┤
│ Critical │ Disable anonymous access to Kubelet service        │ 0                │ 0             │ Action Required *  │
│ Critical │ Enforce Kubelet client TLS authentication          │ 0                │ 0             │ Action Required *  │
│ Medium   │ Prevent containers from allowing command execution │ 2                │ 18            │ 89%                │
│ Medium   │ Non-root containers                                │ 1                │ 1             │ 0%                 │
│ Medium   │ Allow privilege escalation                         │ 1                │ 1             │ 0%                 │
│ Medium   │ Ingress and Egress blocked                         │ 1                │ 1             │ 0%                 │
│ Medium   │ Automatic mapping of service account               │ 3                │ 4             │ 25%                │
│ Medium   │ Administrative Roles                               │ 1                │ 18            │ 94%                │
│ Medium   │ Cluster internal networking                        │ 1                │ 2             │ 50%                │
│ Medium   │ Linux hardening                                    │ 1                │ 1             │ 0%                 │
│ Medium   │ Secret/etcd encryption enabled                     │ 0                │ 0             │ Action Required ** │
│ Medium   │ Audit logs enabled                                 │ 0                │ 0             │ Action Required ** │
│ Low      │ Immutable container filesystem                     │ 1                │ 1             │ 0%                 │
│ Low      │ PSP enabled                                        │ 0                │ 0             │ Action Required ** │
├──────────┼────────────────────────────────────────────────────┼──────────────────┼───────────────┼────────────────────┤
│          │ Resource Summary                                   │ 6                │ 28            │ 54.33%             │
└──────────┴────────────────────────────────────────────────────┴──────────────────┴───────────────┴────────────────────┘
🚨 * This control is scanned exclusively by the Kubescape operator, not the Kubescape CLI. Install the Kubescape operator: https://kubescape.io/docs/install-operator/.
🚨 ** failed to get cloud provider, cluster: kind-latest
[2025-09-22 02:51:09] INFO -- CNTI: stderr: {"level":"info","ts":"2025-09-22T02:51:05Z","msg":"Kubescape scanner initializing..."}
{"level":"warn","ts":"2025-09-22T02:51:07Z","msg":"Deprecated format version","run":"--format-version=v2"}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Initialized scanner"}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Loading policies..."}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Loaded policies"}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Loading exceptions..."}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Loaded exceptions"}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Loading account configurations..."}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Loaded account configurations"}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Accessing Kubernetes objects..."}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Accessed Kubernetes objects"}
{"level":"info","ts":"2025-09-22T02:51:08Z","msg":"Scanning","Cluster":"kind-latest"}
{"level":"info","ts":"2025-09-22T02:51:09Z","msg":"Done scanning","Cluster":"kind-latest"}
{"level":"info","ts":"2025-09-22T02:51:09Z","msg":"Done aggregating results"}
{"level":"info","ts":"2025-09-22T02:51:09Z","msg":"Scan results saved","filename":"kubescape_results.json"}
Overall compliance-score (100- Excellent, 0- All failed): 54
{"level":"info","ts":"2025-09-22T02:51:09Z","msg":"Received interrupt signal, exiting..."}
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:09] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:51:09] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:51:09] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:51:09] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:09] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [non_root_containers]
Failed resource: Deployment coredns-coredns in cnf-default namespace
Remediation: If your application does not need root privileges, make sure to define runAsNonRoot as true or explicitly set the runAsUser using ID 1000 or higher under the PodSecurityContext or container securityContext. In addition, set an explicit value for runAsGroup using ID 1000 or higher.
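[Editor's note] The remediation text above maps directly onto the pod or container securityContext. A minimal sketch of a strategic-merge patch that would address the non_root_containers finding for this Deployment; the field names are standard Kubernetes, the deployment name and namespace come from the log, and the UID/GID values of 1000 follow the remediation text and are illustrative:

import json
import subprocess

# Pod-level securityContext implementing the remediation quoted above.
patch = {
    "spec": {
        "template": {
            "spec": {
                "securityContext": {
                    "runAsNonRoot": True,
                    "runAsUser": 1000,   # any non-root UID of 1000 or higher
                    "runAsGroup": 1000,  # explicit non-root GID, as recommended
                }
            }
        }
    }
}

subprocess.run(
    ["kubectl", "patch", "deployment", "coredns-coredns", "-n", "cnf-default",
     "--type", "strategic", "-p", json.dumps(patch)],
    check=True,
)

Whether the workload tolerates this depends on the image: CoreDNS binding port 53 as an unprivileged user may additionally need the NET_BIND_SERVICE capability.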
✖️ 🏆FAILED: [non_root_containers] Found containers running with root user or user with root group membership 🔓🔑
[2025-09-22 02:51:09] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Task.task_runner.non_root_containers: Starting test
[2025-09-22 02:51:09] INFO -- CNTI: kubescape parse
[2025-09-22 02:51:09] INFO -- CNTI: kubescape test_by_test_name
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'non_root_containers' emoji: 🔓🔑
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'non_root_containers' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points: Task: 'non_root_containers' type: essential
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 0 points
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'non_root_containers' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points: Task: 'non_root_containers' type: essential
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.upsert_task-non_root_containers: Task start time: 2025-09-22 02:51:09 UTC, end time: 2025-09-22 02:51:09 UTC
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Points.upsert_task-non_root_containers: Task: 'non_root_containers' has status: 'failed' and is awarded: 0 points. Runtime: 00:00:00.040620823
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:09] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:51:09] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:51:09] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:51:09] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:09] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [cpu_limits]
✔️ 🏆PASSED: [cpu_limits] Containers have CPU limits set 🔓🔑
[2025-09-22 02:51:09] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Task.task_runner.cpu_limits: Starting test
[2025-09-22 02:51:09] INFO -- CNTI: kubescape parse
[2025-09-22 02:51:09] INFO -- CNTI: kubescape test_by_test_name
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
"resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'cpu_limits' emoji: 🔓🔑 [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'cpu_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points: Task: 'cpu_limits' type: essential [2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'cpu_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points: Task: 'cpu_limits' type: essential [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.upsert_task-cpu_limits: Task start time: 2025-09-22 02:51:09 UTC, end time: 2025-09-22 02:51:09 UTC [2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Points.upsert_task-cpu_limits: Task: 'cpu_limits' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.025644963 [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:09] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-09-22 02:51:09] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:09] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:51:09] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:51:09] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:09] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [memory_limits] ✔️ 🏆PASSED: [memory_limits] Containers have memory limits set 🔓🔑 [2025-09-22 02:51:09] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:09] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Task.task_runner.memory_limits: Starting test [2025-09-22 02:51:09] INFO -- CNTI: kubescape parse [2025-09-22 02:51:09] INFO -- CNTI: kubescape test_by_test_name [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:51:09] DEBUG -- 
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
[2025-09-22 02:51:09] DEBUG -- CNTI-Helm.all_workload_resources: [... the coredns-coredns Deployment from the list above, elided ...]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'memory_limits' emoji: 🔓🔑
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'memory_limits' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points: Task: 'memory_limits' type: essential
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'memory_limits' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points: Task: 'memory_limits' type: essential
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Points.upsert_task-memory_limits: Task start time: 2025-09-22 02:51:09 UTC, end time: 2025-09-22 02:51:09 UTC
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Points.upsert_task-memory_limits: Task: 'memory_limits' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.022169128
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:09] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:51:09] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:51:09] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:51:09] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:09] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [hostpath_mounts]
[2025-09-22 02:51:09] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:09] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:51:09] INFO -- CNTI-CNFManager.Task.task_runner.hostpath_mounts: Starting test
[2025-09-22 02:51:09] INFO -- CNTI: scan command: /home/xtesting/.cnf-testsuite/tools/kubescape/kubescape scan control C-0048 --output kubescape_C-0048_results.json --format json --format-version=v1 --exclude-namespaces kube-system,kube-public,kube-node-lease,local-path-storage,litmus,cnf-testsuite
✔️ 🏆PASSED: [hostpath_mounts] Containers do not have hostPath mounts 🔓🔑
[2025-09-22 02:51:11] INFO -- CNTI: output:
──────────────────────────────────────────────────
┌─────────────────┬───┐
│ Controls        │ 1 │
│ Passed          │ 1 │
│ Failed          │ 0 │
│ Action Required │ 0 │
└─────────────────┴───┘

Failed resources by severity:

┌──────────┬───┐
│ Critical │ 0 │
│ High     │ 0 │
│ Medium   │ 0 │
│ Low      │ 0 │
└──────────┴───┘

Run with '--verbose'/'-v' to see control failures for each resource.

┌──────────┬──────────────────┬──────────────────┬───────────────┬──────────────────┐
│ Severity │ Control name     │ Failed resources │ All Resources │ Compliance score │
├──────────┼──────────────────┼──────────────────┼───────────────┼──────────────────┤
│ High     │ HostPath mount   │ 0                │ 1             │ 100%             │
├──────────┼──────────────────┼──────────────────┼───────────────┼──────────────────┤
│          │ Resource Summary │ 0                │ 1             │ 100.00%          │
└──────────┴──────────────────┴──────────────────┴───────────────┴──────────────────┘
[2025-09-22 02:51:11] INFO -- CNTI: stderr:
{"level":"info","ts":"2025-09-22T02:51:09Z","msg":"Kubescape scanner initializing..."}
{"level":"warn","ts":"2025-09-22T02:51:10Z","msg":"Deprecated format version","run":"--format-version=v2"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Initialized scanner"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Loading policies..."}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Loaded policies"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Loading exceptions..."}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Loaded exceptions"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Loading account configurations..."}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Loaded account configurations"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Accessing Kubernetes objects..."}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Accessed Kubernetes objects"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Scanning","Cluster":"kind-latest"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Done scanning","Cluster":"kind-latest"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Done aggregating results"}
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Scan results saved","filename":"kubescape_C-0048_results.json"}
Overall compliance-score (100- Excellent, 0- All failed): 100
{"level":"info","ts":"2025-09-22T02:51:11Z","msg":"Run with '--verbose'/'-v' flag for detailed resources view\n"}
[2025-09-22 02:51:11] INFO -- CNTI: kubescape parse
[2025-09-22 02:51:11] INFO -- CNTI: kubescape test_by_test_name
[2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
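The hostpath_mounts check shells out to kubescape with the exact command logged above and then parses the JSON file the scan writes. A small Python sketch that reruns that command and inspects the saved report (the flags are copied verbatim from the log; the bare "kubescape" binary name is an assumption, since this run used the copy under /home/xtesting/.cnf-testsuite/tools/kubescape/; no assumptions are made about the report schema beyond it being JSON):

    import json
    import subprocess

    # Same control scan the suite ran for hostpath_mounts (control C-0048).
    cmd = [
        "kubescape", "scan", "control", "C-0048",
        "--output", "kubescape_C-0048_results.json",
        "--format", "json", "--format-version=v1",
        "--exclude-namespaces",
        "kube-system,kube-public,kube-node-lease,local-path-storage,litmus,cnf-testsuite",
    ]
    subprocess.run(cmd, check=True)  # raises CalledProcessError on a non-zero exit

    # Load the saved report and list its top-level keys rather than
    # guessing at field names in the kubescape output format.
    with open("kubescape_C-0048_results.json") as f:
        report = json.load(f)
    print(sorted(report))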
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same five-resource manifest list as above, elided ...]
[2025-09-22 02:51:11] DEBUG -- CNTI-Helm.all_workload_resources: [... the coredns-coredns Deployment from the list above, elided ...]
[2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'hostpath_mounts' emoji: 🔓🔑
[2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostpath_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.Points: Task: 'hostpath_mounts' type: essential
[2025-09-22 02:51:11] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points
[2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostpath_mounts' tags: ["security", "dynamic", "workload", "cert",
"essential"] [2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.Points: Task: 'hostpath_mounts' type: essential [2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.Points.upsert_task-hostpath_mounts: Task start time: 2025-09-22 02:51:09 UTC, end time: 2025-09-22 02:51:11 UTC [2025-09-22 02:51:11] INFO -- CNTI-CNFManager.Points.upsert_task-hostpath_mounts: Task: 'hostpath_mounts' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:02.076591052 [2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:11] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-09-22 02:51:11] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:11] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:11] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:51:11] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:51:11] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:11] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [container_sock_mounts] [2025-09-22 02:51:11] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:11] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:11] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:51:11] INFO -- CNTI-CNFManager.Task.task_runner.container_sock_mounts: Starting test [2025-09-22 02:51:11] DEBUG -- CNTI-http.client: Performing request [2025-09-22 02:51:11] DEBUG -- CNTI-http.client: Performing request [2025-09-22 02:51:12] INFO -- CNTI: TarClient.untar command: tar -xvf /tmp/kyverno0vq76fqr.tar.gz -C /home/xtesting/.cnf-testsuite/tools [2025-09-22 02:51:13] INFO -- CNTI: TarClient.untar output: LICENSE kyverno [2025-09-22 02:51:13] INFO -- CNTI: TarClient.untar stderr: [2025-09-22 02:51:13] INFO -- CNTI: GitClient.clone command: --branch release-1.9 https://github.com/kyverno/policies.git /home/xtesting/.cnf-testsuite/tools/kyverno-policies [2025-09-22 02:51:15] INFO -- CNTI: GitClient.clone output: [2025-09-22 02:51:15] INFO -- CNTI: GitClient.clone stderr: Cloning into '/home/xtesting/.cnf-testsuite/tools/kyverno-policies'... [2025-09-22 02:51:15] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml [2025-09-22 02:51:15] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml [2025-09-22 02:51:15] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml --cluster --policy-report ✔️ 🏆PASSED: [container_sock_mounts] Container engine daemon sockets are not mounted as volumes 🔓🔑 [2025-09-22 02:51:17] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 3 policy rules to 28 resources... 
---------------------------------------------------------------------- POLICY REPORT: ---------------------------------------------------------------------- apiVersion: wgpolicyk8s.io/v1alpha2 kind: ClusterPolicyReport metadata: name: clusterpolicyreport results: - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-crio-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-docker-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-docker-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-docker-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-containerd-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-crio-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-crio-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'validate-crio-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509477
summary: error: 0 fail: 0 pass: 84 skip: 168 warn: 0
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'container_sock_mounts' emoji: 🔓🔑
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'container_sock_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.Points: Task: 'container_sock_mounts' type: essential
[2025-09-22 02:51:17] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'container_sock_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.Points: Task: 'container_sock_mounts' type: essential
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.Points.upsert_task-container_sock_mounts: Task start time: 2025-09-22 02:51:11 UTC, end time: 2025-09-22 02:51:17 UTC
[2025-09-22 02:51:17] INFO -- CNTI-CNFManager.Points.upsert_task-container_sock_mounts: Task: 'container_sock_mounts' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:06.198348809
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:51:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:17] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:51:17] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:51:17] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [selinux_options]
[2025-09-22 02:51:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:17] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:51:17] INFO -- CNTI-CNFManager.Task.task_runner.selinux_options: Starting test
[2025-09-22 02:51:17] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml
[2025-09-22 02:51:17] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml
[2025-09-22 02:51:17] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml --cluster --policy-report
[2025-09-22 02:51:19] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 1 policy rule to 28 resources...
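[Editor's note, not part of the run output] The container_sock_mounts summary above (pass: 84, skip: 168) can be re-derived offline from the ClusterPolicyReport YAML that kyverno emits with --policy-report. A minimal sketch, assuming the report has been saved to a local file named policy-report.yaml (a hypothetical name; this run prints the report to the log rather than to a file) and that PyYAML is installed:

    # Tally the 'result' field of a saved ClusterPolicyReport.
    from collections import Counter
    import yaml

    with open("policy-report.yaml") as handle:
        report = yaml.safe_load(handle)

    tally = Counter(entry["result"] for entry in report.get("results", []))
    # For the full report above this should agree with its summary,
    # e.g. {'pass': 84, 'skip': 168}
    print(dict(tally))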
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata: name: clusterpolicyreport
results:
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'autogen-selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479
- message: validation rule 'selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'autogen-selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'autogen-selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: validation rule 'selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509479 summary: error: 0 fail: 0 pass: 28 skip: 56 warn: 0 [2025-09-22 02:51:19] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml [2025-09-22 02:51:19] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml [2025-09-22 02:51:19] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml --cluster --policy-report ⏭️ 🏆N/A: [selinux_options] Pods are not using SELinux 🔓🔑 Security results: 5 of 6 tests passed  Configuration Tests [2025-09-22 02:51:22] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 2 policy rules to 28 resources... ---------------------------------------------------------------------- POLICY REPORT: ---------------------------------------------------------------------- apiVersion: wgpolicyk8s.io/v1alpha2 kind: ClusterPolicyReport metadata: name: clusterpolicyreport results: - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. 
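An aside on reproducing the audit: the PolicyAudit.run command above checks the
whole cluster via --cluster. A minimal sketch for replaying the same policy
against a single local manifest instead, using the binary and policy path from
the log; pod.yaml is a hypothetical input file, not part of this run:

# Sketch only: apply the disallow-selinux policy to one local manifest
# rather than auditing the live cluster. pod.yaml is hypothetical.
/home/xtesting/.cnf-testsuite/tools/kyverno apply \
    /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml \
    --resource pod.yaml --policy-report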
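The skip messages in the report below spell out the two constraints that
disallow-selinux enforces: seLinuxOptions.type must be unset or one of
container_t, container_init_t, or container_kvm_t, while seLinuxOptions.user
and .role must stay unset. For reference, a minimal manifest that satisfies
both rules could look like the following (illustrative only, not from this
run):

apiVersion: v1
kind: Pod
metadata:
  name: selinux-compliant        # illustrative name
spec:
  securityContext:
    seLinuxOptions:
      type: container_t          # allowed value; user and role stay unset
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9

Setting seLinuxOptions.user or .role, or a type outside the allowed set, would
be expected to turn the corresponding pass results into fails.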
Configuration Tests
[2025-09-22 02:51:22] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 2 policy rules to 28 resources...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
(All disallow-selinux entries follow the same schema as the expanded entry in
the previous report: scored: true, source: kyverno, timestamp {nanos: 0,
seconds: 1758509482}. Entries with result "pass" carry the message "validation
rule '<rule>' passed.". Entries with result "skip" for the selinux-type rules
carry: "Setting the SELinux type is restricted. The fields
spec.securityContext.seLinuxOptions.type,
spec.containers[*].securityContext.seLinuxOptions.type,
spec.initContainers[*].securityContext.seLinuxOptions.type, and
spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be
unset or set to one of the allowed values (container_t, container_init_t, or
container_kvm_t)." Entries with result "skip" for the selinux-user-role rules
carry: "Setting the SELinux user or role is forbidden. The fields
spec.securityContext.seLinuxOptions.user,
spec.securityContext.seLinuxOptions.role,
spec.containers[*].securityContext.seLinuxOptions.user,
spec.containers[*].securityContext.seLinuxOptions.role,
spec.initContainers[*].securityContext.seLinuxOptions.user,
spec.initContainers[*].securityContext.seLinuxOptions.role,
spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and
spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be
unset." Rules: T = selinux-type, U = selinux-user-role, plus the autogen
variants AT = autogen-selinux-type, ACT = autogen-cronjob-selinux-type,
AU = autogen-selinux-user-role, ACU = autogen-cronjob-selinux-user-role.)
KIND        NAME                                           NAMESPACE     UID                                    RESULTS
Pod         kindnet-kcb5w                                  kube-system   cce13eb5-a7db-4c46-9d62-d5b9295bb3e9   T pass, U pass, AT/ACT/AU/ACU skip
Pod         kube-proxy-splj6                               kube-system   6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3   T pass, U pass, AT/ACT/AU/ACU skip
Pod         coredns-674b8bbfcf-2h8qn                       kube-system   0d621515-f034-4f5f-a87d-cde724ecabe9   T pass, U pass, AT/ACT/AU/ACU skip
Pod         coredns-674b8bbfcf-klpkw                       kube-system   81a923bf-acac-4f52-aae4-fdd7b43657a9   T pass, U pass, AT/ACT/AU/ACU skip
DaemonSet   create-loop-devs                               kube-system   70c52fbd-1070-46e6-840d-0de8d83e9546   T skip, U skip, AT pass, ACT skip, AU pass, ACU skip
Pod         create-loop-devs-pw6dk                         kube-system   44428f77-823b-40ff-abf2-5f389ff0206f   T pass, U pass, AT/ACT/AU/ACU skip
Pod         kindnet-qc9kz                                  kube-system   52f0a1b4-cd32-484b-a888-f34930ce13c9   T pass, U pass, AT/ACT/AU/ACU skip
Pod         kube-apiserver-latest-control-plane            kube-system   9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b   T pass, U pass, AT/ACT/AU/ACU skip
Pod         kube-proxy-pmhrl                               kube-system   2dd81692-e0f5-4e2a-9426-6478b1143b42   T pass, U pass, AT/ACT/AU/ACU skip
Pod         create-loop-devs-76ng2                         kube-system   c8a2508f-02a8-43a5-89cc-e61f71b490bc   T pass, U pass, AT/ACT/AU/ACU skip
Pod         create-loop-devs-mbrb2                         kube-system   c24555f5-2f2a-46ef-8826-411e1ee1313c   T pass, U pass, AT/ACT/AU/ACU skip
Pod         kube-controller-manager-latest-control-plane   kube-system   3b34b8ec-d436-4a1a-8177-ed0f073f8761   T pass, U pass, AT/ACT/AU/ACU skip
Pod         kube-scheduler-latest-control-plane            kube-system   7a513cdc-529f-4e6e-9ed6-83771aec52d9   T pass, U pass, AT/ACT/AU/ACU skip
DaemonSet   kube-proxy                                     kube-system   1196faf4-d2a7-4623-9e0b-56c9eb731f75   T skip, U skip, AT pass, ACT skip, AU pass, ACU skip
Deployment  coredns                                        kube-system   58ee2795-427d-4d78-9cfe-7c7c8a812674   T skip; its remaining entries continue below
- message: Setting the SELinux user or role is forbidden.
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. 
The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. 
The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. 
The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-kfzml namespace: local-path-storage uid: a82bf736-119b-42ef-a44d-595edffec869 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 0e327bff-0242-4f48-972b-a46756f369ed result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-c928s namespace: litmus uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-mh8tg namespace: cnf-testsuite uid: a64c21a2-5def-48e2-b99a-462b901678e0 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-type' passed. 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. 
The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-5bf6l namespace: cnf-default uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509482 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. 
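For reference, a minimal sketch of the kind of Kyverno ClusterPolicy behind these entries, modeled on the upstream pod-security baseline disallow-selinux policy; only the selinux-type rule is shown, the field list in the message and the container entries are abbreviated, and the exact upstream manifest may differ:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-selinux
spec:
  validationFailureAction: Audit   # report-only, consistent with the pass/skip results above
  background: true
  rules:
    - name: selinux-type
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: >-
          Setting the SELinux type is restricted. The fields ... must either be
          unset or set to one of the allowed values (container_t,
          container_init_t, or container_kvm_t).
        pattern:
          spec:
            # =() anchors make a check conditional: it fires only when the field is present
            =(securityContext):
              =(seLinuxOptions):
                =(type): "container_t | container_init_t | container_kvm_t"
            containers:
              - =(securityContext):
                  =(seLinuxOptions):
                    =(type): "container_t | container_init_t | container_kvm_t"

Kyverno auto-generates the autogen-* variants of each rule for workload controllers such as Deployments and DaemonSets, and the autogen-cronjob-* variants for CronJobs, which is where the autogen entries in the report above come from.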
[2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-coredns
  namespace: cnf-default
  labels:
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: coredns
    helm.sh/chart: coredns-1.13.8
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
    app.kubernetes.io/name: coredns
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus 0.0.0.0:9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: coredns-coredns
  labels:  # chart labels identical to the ConfigMap above
rules:
  - apiGroups: [""]
    resources: ["endpoints", "services", "pods", "namespaces"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: coredns-coredns
  labels:  # chart labels identical to the ConfigMap above
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: coredns-coredns
subjects:
  - kind: ServiceAccount
    name: default
    namespace: cnf-default
---
apiVersion: v1
kind: Service
metadata:
  name: coredns-coredns
  namespace: cnf-default
  labels:  # chart labels identical to the ConfigMap above
  annotations: {}
spec:
  selector:
    app.kubernetes.io/instance: coredns
    k8s-app: coredns
    app.kubernetes.io/name: coredns
  ports:
    - port: 53
      protocol: UDP
      name: udp-53
    - port: 53
      protocol: TCP
      name: tcp-53
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns-coredns
  namespace: cnf-default
  labels:  # chart labels identical to the ConfigMap above
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: "25%"
  selector:
    matchLabels:
      app.kubernetes.io/instance: coredns
      k8s-app: coredns
      app.kubernetes.io/name: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
        app.kubernetes.io/name: coredns
        app.kubernetes.io/instance: coredns
      annotations:
        checksum/config: 473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9
        scheduler.alpha.kubernetes.io/critical-pod: ""
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: default
      dnsPolicy: Default
      containers:
        - name: coredns
          image: coredns/coredns:1.7.1
          imagePullPolicy: IfNotPresent
          args: ["-conf", "/etc/coredns/Corefile"]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
          ports:
            - containerPort: 53
              protocol: UDP
              name: udp-53
            - containerPort: 53
              protocol: TCP
              name: tcp-53
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
      volumes:
        - name: config-volume
          configMap:
            name: coredns-coredns
            items:
              - key: Corefile
                path: Corefile
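None of these manifests set seLinuxOptions anywhere, which is why the chart's Deployment passes both autogen rules in the report above. As a purely hypothetical illustration (the coredns chart does not carry this block), a securityContext compliant with the disallow-selinux policy could only set the SELinux type, and only to an allowed value:

# hypothetical fragment, for illustration only; not part of the chart above
spec:
  template:
    spec:
      containers:
        - name: coredns
          securityContext:
            seLinuxOptions:
              type: container_t   # allowed: container_t, container_init_t, container_kvm_t
              # user and role must stay unset per the selinux-user-role rule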
"kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => 
"udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", 
"k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => 
"udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", 
"k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:22] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], 
"resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'selinux_options' emoji: 🔓🔑 [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'selinux_options' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points: Task: 'selinux_options' type: essential [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: selinux_options is worth: 0 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'selinux_options' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points: Task: 'selinux_options' type: essential [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.upsert_task-selinux_options: Task start time: 2025-09-22 02:51:17 UTC, end time: 2025-09-22 02:51:22 UTC [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.upsert_task-selinux_options: Task: 'selinux_options' has status: 'na' and is awarded: 0 points.Runtime: 00:00:04.571319465 [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", "linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "container_sock_mounts", "selinux_options"] for tags: ["security", "cert"] [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 500, total tasks passed: 5 for tags: ["security", "cert"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", 
"linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: 
hostpath_mounts [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 600, max tasks passed: 6 for tags: ["security", "cert"] [second, byte-identical total_tasks_points / total_max_tasks_points pass for tags: ["security", "cert"] elided] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 700, total tasks passed: 7 for tags: ["essential"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests:
["non_root_containers"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:22] 
INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: 
hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-09-22 02:51:22] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-09-22 02:51:22] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => 
"/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-09-22 02:51:22] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-09-22 02:51:22] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}], "maximum_points" => 600} [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-09-22 02:51:22] DEBUG -- 
CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:22] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-09-22 02:51:22] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:51:22] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:51:22] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:22] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [hostport_not_used] [2025-09-22 02:51:22] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Task.task_runner.hostport_not_used: Starting test [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.workload_resource_test: Starting test [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-09-22 02:51:22] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-09-22 02:51:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:22] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-09-22 02:51:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:22] INFO -- CNTI-hostport_not_used: hostport_not_used resource: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"} [2025-09-22 02:51:22] INFO -- CNTI-hostport_not_used: resource kind: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"} [2025-09-22 02:51:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns ✔️ 🏆PASSED: [hostport_not_used] HostPort is not used  [2025-09-22 02:51:22] DEBUG -- CNTI-hostport_not_used: resource: {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"annotations" => {"deployment.kubernetes.io/revision" => "1", "litmuschaos.io/chaos" => "true", "meta.helm.sh/release-name" => "coredns", "meta.helm.sh/release-namespace" => "cnf-default"}, "creationTimestamp" => 
"2025-09-22T02:47:30Z", "generation" => 4, "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/name" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS"}, "name" => "coredns-coredns", "namespace" => "cnf-default", "resourceVersion" => "6034501", "uid" => "1ae8e7af-52a6-4243-8df1-61f8e0998c7a"}, "spec" => {"progressDeadlineSeconds" => 600, "replicas" => 1, "revisionHistoryLimit" => 10, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}}, "strategy" => {"rollingUpdate" => {"maxSurge" => "25%", "maxUnavailable" => 1}, "type" => "RollingUpdate"}, "template" => {"metadata" => {"annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}, "creationTimestamp" => nil, "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}}, "spec" => {"containers" => [{"args" => ["-conf", "/etc/coredns/Corefile"], "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "livenessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "name" => "coredns", "ports" => [{"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}, {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}], "readinessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "terminationMessagePath" => "/dev/termination-log", "terminationMessagePolicy" => "File", "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}]}], "dnsPolicy" => "Default", "restartPolicy" => "Always", "schedulerName" => "default-scheduler", "securityContext" => {}, "serviceAccount" => "default", "serviceAccountName" => "default", "terminationGracePeriodSeconds" => 30, "volumes" => [{"configMap" => {"defaultMode" => 420, "items" => [{"key" => "Corefile", "path" => "Corefile"}], "name" => "coredns-coredns"}, "name" => "config-volume"}]}}}, "status" => {"availableReplicas" => 1, "conditions" => [{"lastTransitionTime" => "2025-09-22T02:47:30Z", "lastUpdateTime" => "2025-09-22T02:47:45Z", "message" => "ReplicaSet \"coredns-coredns-64fc886fd4\" has successfully progressed.", "reason" => "NewReplicaSetAvailable", "status" => "True", "type" => "Progressing"}, {"lastTransitionTime" => "2025-09-22T02:47:59Z", "lastUpdateTime" => "2025-09-22T02:47:59Z", "message" => "Deployment has minimum availability.", "reason" => "MinimumReplicasAvailable", "status" => "True", "type" => "Available"}], "observedGeneration" => 4, "readyReplicas" => 1, "replicas" => 1, "updatedReplicas" => 1}} [2025-09-22 02:51:22] DEBUG -- CNTI-hostport_not_used: containers: [{"args" => ["-conf", "/etc/coredns/Corefile"], "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "livenessProbe" => 
{"failureThreshold" => 5, "httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "name" => "coredns", "ports" => [{"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}, {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}], "readinessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "terminationMessagePath" => "/dev/termination-log", "terminationMessagePolicy" => "File", "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}]}] [2025-09-22 02:51:22] DEBUG -- CNTI-hostport_not_used: single_port: {"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"} [2025-09-22 02:51:22] DEBUG -- CNTI-hostport_not_used: DAS hostPort: [2025-09-22 02:51:22] DEBUG -- CNTI-hostport_not_used: single_port: {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"} [2025-09-22 02:51:22] DEBUG -- CNTI-hostport_not_used: DAS hostPort: [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: true [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostport_not_used' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points: Task: 'hostport_not_used' type: essential [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostport_not_used' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points: Task: 'hostport_not_used' type: essential [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Points.upsert_task-hostport_not_used: Task start time: 2025-09-22 02:51:22 UTC, end time: 2025-09-22 02:51:22 UTC [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Points.upsert_task-hostport_not_used: Task: 'hostport_not_used' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.357242221 [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:22] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-09-22 02:51:22] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:51:22] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:51:22] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:22] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [hardcoded_ip_addresses_in_k8s_runtime_configuration] [2025-09-22 02:51:22] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:22] DEBUG -- CNTI-CNFManager.Task.task_runner: Run 
task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:51:22] INFO -- CNTI-CNFManager.Task.task_runner.hardcoded_ip_addresses_in_k8s_runtime_configuration: Starting test [2025-09-22 02:51:22] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4" [2025-09-22 02:51:22] DEBUG -- CNTI: Helm Path: helm [2025-09-22 02:51:22] INFO -- CNTI-KubectlClient.Delete.resource: Delete resource namespace/hardcoded-ip-test ✔️ 🏆PASSED: [hardcoded_ip_addresses_in_k8s_runtime_configuration] No hard-coded IP addresses found in the runtime K8s configuration  [2025-09-22 02:51:23] WARN -- CNTI-KubectlClient.Delete.resource.cmd: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): namespaces "hardcoded-ip-test" not found [2025-09-22 02:51:23] WARN -- CNTI-KubectlClient.Delete.resource: Failed to delete resource hardcoded-ip-test: kubectl CMD failed, exit code: 1, error: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): namespaces "hardcoded-ip-test" not found [2025-09-22 02:51:23] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:23] DEBUG -- CNTI-CNFManager.Points: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' type: essential [2025-09-22 02:51:23] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-09-22 02:51:23] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:23] DEBUG -- CNTI-CNFManager.Points: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' type: essential [2025-09-22 02:51:23] DEBUG -- CNTI-CNFManager.Points.upsert_task-hardcoded_ip_addresses_in_k8s_runtime_configuration: Task start time: 2025-09-22 02:51:22 UTC, end time: 2025-09-22 02:51:23 UTC [2025-09-22 02:51:23] INFO -- CNTI-CNFManager.Points.upsert_task-hardcoded_ip_addresses_in_k8s_runtime_configuration: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.230634414 [2025-09-22 02:51:23] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:23] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-09-22 02:51:23] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:23] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:23] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:51:23] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:51:23] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:51:23] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:23] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [latest_tag] [2025-09-22 02:51:23] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
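Note on the latest_tag check that has just started: as the next log entries show, it shells out to "kyverno apply <disallow_latest_tag policy> --cluster --policy-report" and then evaluates the ClusterPolicyReport that kyverno prints. Purely as an illustrative sketch (not the testsuite's own logic), the standalone Python script below tallies such a report's results per rule; it assumes the YAML portion of the kyverno output has been saved to a local file named clusterpolicyreport.yaml, and its simplified pass criterion (no "fail" results) is likewise an assumption made here for illustration.

#!/usr/bin/env python3
"""Tally results in a Kyverno ClusterPolicyReport (illustrative sketch).

Assumes the YAML document printed by
    kyverno apply disallow_latest_tag.yaml --cluster --policy-report
was saved to clusterpolicyreport.yaml (hypothetical path).
Requires PyYAML (pip install pyyaml).
"""
from collections import Counter

import yaml

REPORT_FILE = "clusterpolicyreport.yaml"  # hypothetical path, see note above


def tally(report: dict) -> Counter:
    """Count (rule, result) pairs across all entries in the report."""
    counts: Counter = Counter()
    for entry in report.get("results", []):
        counts[(str(entry.get("rule")), str(entry.get("result")))] += 1
    return counts


def main() -> None:
    with open(REPORT_FILE) as fh:
        report = yaml.safe_load(fh)
    counts = tally(report)
    for (rule, result), n in sorted(counts.items()):
        print(f"{rule:45s} {result:6s} {n:4d}")
    # Simplified verdict (assumption): any "fail" result fails the check.
    failed = sum(n for (_, result), n in counts.items() if result == "fail")
    print("verdict:", "PASSED" if failed == 0 else "FAILED")


if __name__ == "__main__":
    main()

In the report dumped below, every entry is either a "pass" or a "skip" (the skips come from kyverno's autogen-* rules, which do not apply to the resource kind being checked), so a tally of this form would find zero failures, consistent with the PASSED verdict logged for latest_tag.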
[2025-09-22 02:51:23] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:23] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:51:23] INFO -- CNTI-CNFManager.Task.task_runner.latest_tag: Starting test [2025-09-22 02:51:23] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml [2025-09-22 02:51:23] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml [2025-09-22 02:51:23] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml --cluster --policy-report ✔️ 🏆PASSED: [latest_tag] Container images are not using the latest tag 🏷️ Configuration results: 3 of 3 tests passed  Observability and Diagnostics Tests [2025-09-22 02:51:24] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 2 policy rules to 28 resources... ---------------------------------------------------------------------- POLICY REPORT: ---------------------------------------------------------------------- apiVersion: wgpolicyk8s.io/v1alpha2 kind: ClusterPolicyReport metadata: name: clusterpolicyreport results: - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-plcq4 namespace: kube-system uid: 2c1cc7c7-bf4b-4908-9a19-dfb60919a23a result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-76ng2 namespace: kube-system uid: c8a2508f-02a8-43a5-89cc-e61f71b490bc result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-kcb5w namespace: kube-system uid: cce13eb5-a7db-4c46-9d62-d5b9295bb3e9 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-splj6 namespace: kube-system uid: 6f4f44e8-6c17-46dd-8c69-01d0b2c75bb3 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-klpkw namespace: kube-system uid: 81a923bf-acac-4f52-aae4-fdd7b43657a9 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 58ee2795-427d-4d78-9cfe-7c7c8a812674 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-mbrb2 namespace: kube-system uid: c24555f5-2f2a-46ef-8826-411e1ee1313c result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 272e2558-d0d6-4d1e-933d-8d76a95e8501 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-latest-control-plane namespace: kube-system uid: 8cbbdbe6-b571-44ec-bae5-c3994807ff77 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 
'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-pthkx namespace: kube-system uid: c1554c51-e4f3-40cd-a582-212f2257450b result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-latest-control-plane namespace: kube-system uid: 9842cf1b-9ec6-4f07-af0c-612e2ecf0e9b result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-pmhrl namespace: kube-system uid: 2dd81692-e0f5-4e2a-9426-6478b1143b42 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-latest-control-plane namespace: kube-system uid: 7a513cdc-529f-4e6e-9ed6-83771aec52d9 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 70c52fbd-1070-46e6-840d-0de8d83e9546 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-674b8bbfcf-2h8qn namespace: kube-system uid: 0d621515-f034-4f5f-a87d-cde724ecabe9 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-qc9kz namespace: kube-system uid: 52f0a1b4-cd32-484b-a888-f34930ce13c9 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-latest-control-plane namespace: kube-system uid: 3b34b8ec-d436-4a1a-8177-ed0f073f8761 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 1196faf4-d2a7-4623-9e0b-56c9eb731f75 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-pw6dk namespace: kube-system uid: 44428f77-823b-40ff-abf2-5f389ff0206f result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-bt4hs namespace: cnf-testsuite uid: bac83706-d9e4-4d05-baa3-bf5d902ca906 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag
every result below also carried: scored: true, source: kyverno, timestamp: {nanos: 0, seconds: 1758509484}; the message text tracks the outcome: pass -> "validation rule '<rule>' passed.", skip on a require-image-tag rule -> "An image tag is required.", skip on a validate-image-tag rule -> "Using a mutable image tag e.g. 'latest' is not allowed."
results, grouped by resource in log order:
- Pod cnf-testsuite/cluster-tools-bt4hs (uid: bac83706-d9e4-4d05-baa3-bf5d902ca906)
    autogen-cronjob-validate-image-tag: skip
- Pod cnf-testsuite/cluster-tools-mh8tg (uid: a64c21a2-5def-48e2-b99a-462b901678e0)
    require-image-tag: pass
    validate-image-tag: pass
    autogen-require-image-tag: skip
    autogen-cronjob-require-image-tag: skip
    autogen-validate-image-tag: skip
    autogen-cronjob-validate-image-tag: skip
- DaemonSet cnf-testsuite/cluster-tools (uid: 04ece8b8-3107-4b54-954f-5bbbe1d7f521)
    require-image-tag: skip
    validate-image-tag: skip
    autogen-require-image-tag: pass
    autogen-cronjob-require-image-tag: skip
    autogen-validate-image-tag: pass
    autogen-cronjob-validate-image-tag: skip
- Pod cnf-default/coredns-coredns-64fc886fd4-5bf6l (uid: 0412b191-f9bf-4fc1-904b-bf7034dffe7e)
    require-image-tag: pass
    validate-image-tag: pass
    autogen-require-image-tag: skip
    autogen-cronjob-require-image-tag: skip
    autogen-validate-image-tag: skip
    autogen-cronjob-validate-image-tag: skip
- Deployment cnf-default/coredns-coredns (uid: 1ae8e7af-52a6-4243-8df1-61f8e0998c7a)
    require-image-tag: skip
    validate-image-tag: skip
    autogen-require-image-tag: pass
    autogen-cronjob-require-image-tag: skip
    autogen-validate-image-tag: pass
    autogen-cronjob-validate-image-tag: skip
- Deployment litmus/chaos-operator-ce (uid: 0e327bff-0242-4f48-972b-a46756f369ed)
    require-image-tag: skip
    validate-image-tag: skip
    autogen-require-image-tag: pass
    autogen-cronjob-require-image-tag: skip
    autogen-validate-image-tag: pass
    autogen-cronjob-validate-image-tag: skip
- Pod litmus/chaos-operator-ce-644fbcd4b7-c928s (uid: a9b58479-bf9f-41b8-bd0b-25f3f6801807)
    require-image-tag: pass
    validate-image-tag: pass
    autogen-require-image-tag: skip
    autogen-cronjob-require-image-tag: skip
    autogen-validate-image-tag: skip
    autogen-cronjob-validate-image-tag: skip
- Pod local-path-storage/local-path-provisioner-7dc846544d-kfzml (uid: a82bf736-119b-42ef-a44d-595edffec869)
    require-image-tag: pass
    validate-image-tag: pass
    autogen-require-image-tag: skip
    autogen-cronjob-require-image-tag: skip
    autogen-validate-image-tag: skip
    autogen-cronjob-validate-image-tag: skip
- Deployment local-path-storage/local-path-provisioner (uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc)
    require-image-tag: skip
    validate-image-tag: skip
    autogen-require-image-tag: pass
    autogen-cronjob-require-image-tag: skip
    autogen-validate-image-tag: pass
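The pass/skip counts in the summary just below can be re-tallied independently of the suite. A minimal Python sketch (not part of cnf-testsuite), assuming kubectl is on PATH, KUBECONFIG points at the same cluster, and Kyverno's PolicyReport CRDs are installed; the result layout (results entries with policy, rule, result) is the standard wgpolicyk8s.io report schema seen above:

import json
import subprocess
from collections import Counter

# Fetch every namespaced Kyverno PolicyReport in the cluster as JSON.
raw = subprocess.check_output(
    ["kubectl", "get", "policyreports", "-A", "-o", "json"]
)

tally = Counter()
for report in json.loads(raw)["items"]:
    for res in report.get("results", []):
        # Each result carries the policy, rule, and outcome seen above.
        if res.get("policy") == "disallow-latest-tag":
            tally[res["result"]] += 1

print(dict(tally))  # expected to line up with the suite's summary below

- message: Using a mutable image tag e.g. 'latest' is not allowed.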
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: a70b316f-7b15-4463-8d6a-f56b649ea8cc result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1758509484 summary: error: 0 fail: 0 pass: 56 skip: 112 warn: 0 [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", 
"kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => 
"udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", 
"k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => 
"udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", 
"k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:24] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], 
"resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'latest_tag' emoji: 🏷️ [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'latest_tag' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points: Task: 'latest_tag' type: essential [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'latest_tag' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points: Task: 'latest_tag' type: essential [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.upsert_task-latest_tag: Task start time: 2025-09-22 02:51:23 UTC, end time: 2025-09-22 02:51:24 UTC [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.upsert_task-latest_tag: Task: 'latest_tag' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:01.902068468 [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "latest_tag"] for tags: ["configuration", "cert"] [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 300, total tasks passed: 3 for tags: ["configuration", "cert"] [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", 
"single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-09-22 02:51:24] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:24] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 300, max tasks passed: 3 for tags: ["configuration", "cert"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "latest_tag"] for tags: ["configuration", "cert"] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 300, total tasks passed: 3 for tags: ["configuration", "cert"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:25] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 300, max tasks passed: 3 for tags: ["configuration", "cert"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1000, total tasks passed: 10 for tags: ["essential"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: 
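The Points entries above and below reduce to simple arithmetic: each essential task is worth a flat 100 points, a multi-tag query like ["configuration", "cert"] only counts tasks carrying all of those tags (here the intersection is 3 tasks, hence the 300-point totals), passed tasks feed the scored total, and NA tasks drop out of the maximum while failed ones do not. A minimal sketch of that tally, using the task statuses reported in the update_yml dump further down (a subset of the 19 essential tasks):

# Task statuses as reported in the update_yml items list below.
results = {
    "increase_decrease_capacity": "passed", "node_drain": "passed",
    "privileged_containers": "passed", "non_root_containers": "failed",
    "cpu_limits": "passed", "memory_limits": "passed",
    "hostpath_mounts": "passed", "container_sock_mounts": "passed",
    "selinux_options": "na", "hostport_not_used": "passed",
    "hardcoded_ip_addresses_in_k8s_runtime_configuration": "passed",
    "latest_tag": "passed",
}

POINTS_PER_TASK = 100  # "Task: ... is worth: 100 points"

# Scored total counts passed tasks only; the maximum keeps failed tasks
# but excludes NA ones.
scored = sum(POINTS_PER_TASK for s in results.values() if s == "passed")
maximum = sum(POINTS_PER_TASK for s in results.values() if s != "na")
print(scored)   # 1000, matching "Total points scored: 1000 ... tags: [essential]"
print(maximum)  # 1100 for these 12 tasks; the log's 1800 maximum spans all
                # 19 essential tasks minus the one NA (selinux_options)
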
single_process_type [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: 
hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-09-22 02:51:25] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:51:25] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => 
"essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:51:25] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:51:25] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => 
"essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 300} [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:25] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-09-22 02:51:25] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:51:25] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:51:25] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:25] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [log_output] [2025-09-22 02:51:25] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Task.task_runner.log_output: Starting test [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.workload_resource_test: Starting test [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-09-22 02:51:25] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-09-22 02:51:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:25] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-09-22 02:51:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:25] DEBUG -- CNTI-KubectlClient.Utils.logs: Dump logs of Deployment/coredns-coredns ✔️ 🏆PASSED: [log_output] Resources output logs to stdout and stderr 📶☠️ Observability and diagnostics results: 1 of 1 tests passed  Microservice Tests [2025-09-22 02:51:25] INFO -- CNTI-Log lines: 
[pod/coredns-coredns-64fc886fd4-5bf6l/coredns] W0922 02:47:34.840209 1 warnings.go:67] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice [pod/coredns-coredns-64fc886fd4-5bf6l/coredns] .:53 [pod/coredns-coredns-64fc886fd4-5bf6l/coredns] [INFO] plugin/reload: Running configuration MD5 = d8c79061f144bdb41e9378f9aa781f71 [pod/coredns-coredns-64fc886fd4-5bf6l/coredns] CoreDNS-1.7.1 [pod/coredns-coredns-64fc886fd4-5bf6l/coredns] linux/amd64, go1.15.2, aa82ca6 [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: true [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'log_output' emoji: 📶☠️ [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'log_output' tags: ["observability", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points: Task: 'log_output' type: essential [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'log_output' tags: ["observability", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points: Task: 'log_output' type: essential [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.upsert_task-log_output: Task start time: 2025-09-22 02:51:25 UTC, end time: 2025-09-22 02:51:25 UTC [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.upsert_task-log_output: Task: 'log_output' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.376898444 [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["log_output"] for tags: ["observability", "cert"] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["observability", "cert"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- 
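For reference, the log_output check recorded above amounts to dumping the workload's logs and passing when anything at all was written to stdout/stderr. A rough equivalent using plain kubectl, with the resource and namespace names taken from this run; the pass criterion is inferred from the PASSED message rather than taken from the testsuite's Crystal source, and kubectl here samples a single pod of the Deployment:

    import subprocess

    def emits_logs(kind: str, name: str, namespace: str) -> bool:
        # Dump the logs of every container in the resource, as
        # KubectlClient.Utils.logs does in the entries above.
        proc = subprocess.run(
            ["kubectl", "logs", f"{kind}/{name}", "-n", namespace,
             "--all-containers", "--tail=25"],
            capture_output=True, text=True,
        )
        # Pass when the workload produced any log output at all.
        return proc.returncode == 0 and bool(proc.stdout.strip())

    if emits_logs("deployment", "coredns-coredns", "cnf-default"):
        print("PASSED: [log_output] Resources output logs to stdout and stderr")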
CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["observability", "cert"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["log_output"] for tags: ["observability", "cert"] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["observability", "cert"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:51:25] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["observability", "cert"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1100, total tasks passed: 11 for tags: ["essential"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system 
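Reading the Points entries above and below, the scoring rules appear to be: every essential task is worth 100 points; a failed task (non_root_containers here) scores 0 but still counts toward the maximum; a task marked NA (selinux_options) is excluded from both totals; and the maximum is computed over the full task list for the tag, including tasks that have not run yet. A minimal sketch of that bookkeeping, reconstructed from the log output rather than from the testsuite's own code:

    POINTS_PER_TASK = 100

    # The 19 tasks listed for tag "essential" in the entries above.
    essential_tasks = [
        "specialized_init_system", "single_process_type", "zombie_handled",
        "sig_term_handled", "increase_decrease_capacity", "liveness",
        "readiness", "hostport_not_used",
        "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain",
        "privileged_containers", "non_root_containers", "cpu_limits",
        "memory_limits", "hostpath_mounts", "log_output",
        "container_sock_mounts", "selinux_options", "latest_tag",
    ]

    # Statuses recorded so far in this run (see the update_yml dumps).
    results = {
        "increase_decrease_capacity": "passed", "node_drain": "passed",
        "privileged_containers": "passed", "non_root_containers": "failed",
        "cpu_limits": "passed", "memory_limits": "passed",
        "hostpath_mounts": "passed", "container_sock_mounts": "passed",
        "selinux_options": "na", "hostport_not_used": "passed",
        "hardcoded_ip_addresses_in_k8s_runtime_configuration": "passed",
        "latest_tag": "passed", "log_output": "passed",
    }

    scored = sum(POINTS_PER_TASK for s in results.values() if s == "passed")
    maximum = sum(POINTS_PER_TASK for t in essential_tasks
                  if results.get(t) != "na")

    # Reproduces the log: 1100 scored / 11 passed, 1800 maximum / 18 tasks.
    print(f"Total points scored: {scored}, total tasks passed: {scored // POINTS_PER_TASK}")
    print(f"Max points scored: {maximum}, max tasks passed: {maximum // POINTS_PER_TASK}")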
[2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: 
NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:25] INFO 
-- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-09-22 02:51:25] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-09-22 02:51:25] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:51:25] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => 
"/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:51:25] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:51:25] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 100} [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-09-22 02:51:25] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:51:25] INFO -- CNTI-Setup.install_cluster_tools: Installing cluster_tools on the cluster [2025-09-22 02:51:25] INFO -- CNTI: ClusterTools install [2025-09-22 02:51:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces [2025-09-22 02:51:25] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"cnf-default\"}}\n"}, "creationTimestamp" => "2025-09-22T02:47:29Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-default", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-default", "resourceVersion" => "6034283", "uid" => "7b6a0272-d135-41ea-a551-c9cafecf72c9"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"cnf-testsuite\"}}\n"}, "creationTimestamp" => "2025-09-22T02:47:15Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "6034230", "uid" => "a47765ed-7349-4df3-a8f5-e815875705b3"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "20", "uid" => "854ae2dc-15f4-420c-8f58-c250a8f7b1c3"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => 
"2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "27", "uid" => "c1130257-aaa7-40b1-8a9a-d4cb9de8c5b9"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "12", "uid" => "eb7f560d-60ea-4c44-8abd-afb9b8a4f197"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "4", "uid" => "775d0b7a-a3fe-4870-91da-260ed7d8a71e"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"litmus\"}}\n"}, "creationTimestamp" => "2025-09-22T02:48:03Z", "labels" => {"kubernetes.io/metadata.name" => "litmus", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "litmus", "resourceVersion" => "6034447", "uid" => "eb65a094-3035-4215-a707-6a973ac30134"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-08-14T10:01:17Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "291", "uid" => "4772814a-342a-45ba-8647-3b9bbce45548"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}] [2025-09-22 02:51:25] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file cluster_tools.yml [2025-09-22 02:51:25] INFO -- CNTI: ClusterTools wait_for_cluster_tools [2025-09-22 02:51:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces [2025-09-22 02:51:26] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"cnf-default\"}}\n"}, "creationTimestamp" => "2025-09-22T02:47:29Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-default", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-default", "resourceVersion" => "6034283", "uid" => "7b6a0272-d135-41ea-a551-c9cafecf72c9"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"cnf-testsuite\"}}\n"}, "creationTimestamp" => "2025-09-22T02:47:15Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "6034230", "uid" => 
"a47765ed-7349-4df3-a8f5-e815875705b3"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "20", "uid" => "854ae2dc-15f4-420c-8f58-c250a8f7b1c3"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "27", "uid" => "c1130257-aaa7-40b1-8a9a-d4cb9de8c5b9"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "12", "uid" => "eb7f560d-60ea-4c44-8abd-afb9b8a4f197"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-08-14T10:01:12Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "4", "uid" => "775d0b7a-a3fe-4870-91da-260ed7d8a71e"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"litmus\"}}\n"}, "creationTimestamp" => "2025-09-22T02:48:03Z", "labels" => {"kubernetes.io/metadata.name" => "litmus", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "litmus", "resourceVersion" => "6034447", "uid" => "eb65a094-3035-4215-a707-6a973ac30134"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-08-14T10:01:17Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "291", "uid" => "4772814a-342a-45ba-8647-3b9bbce45548"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}] [2025-09-22 02:51:26] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Daemonset/cluster-tools to install [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools [2025-09-22 02:51:26] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Daemonset/cluster-tools is ready [2025-09-22 02:51:26] INFO -- CNTI-Setup.install_cluster_tools: cluster_tools has been installed on the cluster [2025-09-22 02:51:26] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:26] DEBUG -- CNTI: find command: find installed_cnf_files/* -name 
"cnf-testsuite.yml" [2025-09-22 02:51:26] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:26] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:26] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:51:26] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:51:26] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:51:26] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:26] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [specialized_init_system] [2025-09-22 02:51:26] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:26] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:26] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:51:26] INFO -- CNTI-CNFManager.Task.task_runner.specialized_init_system: Starting test [2025-09-22 02:51:26] INFO -- CNTI-CNFManager.workload_resource_test: Starting test [2025-09-22 02:51:26] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"] [2025-09-22 02:51:26] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources [2025-09-22 02:51:26] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:51:26] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:26] INFO -- CNTI-specialized_init_system: Checking Deployment/coredns-coredns in cnf-default [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-09-22 02:51:26] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:26] INFO -- CNTI-specialized_init_system: Inspecting pod coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-09-22 02:51:26] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with 
pod/coredns-coredns-64fc886fd4-5bf6l list: latest-worker2 [2025-09-22 02:51:26] INFO -- CNTI: parse_container_id container_id: containerd://fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:26] INFO -- CNTI: node_pid_by_container_id container_id: fd5d8bcdd87d1c [2025-09-22 02:51:26] INFO -- CNTI: parse_container_id container_id: fd5d8bcdd87d1c [2025-09-22 02:51:26] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:27] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:27] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:27] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:27] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-09-22T02:51:27Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-09-22T02:51:27Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." [2025-09-22 02:51:27] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.234.94:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.234.94:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.234.94:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": 
\"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1,\n \"userns_options\": {\n \"mode\": 2\n }\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d\"\n }\n ]\n },\n \"pid\": 3536121,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-pod0412b191_f9bf_4fc1_904b_bf7034dffe7e.slice:cri-containerd:fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n 
\"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/3536096/ns/ipc\",\n \"type\": \"ipc\"\n },\n {\n \"path\": \"/proc/3536096/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/3536096/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": 
\"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-64fc886fd4-5bf6l\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.234.94:53\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.234.94:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.234.94:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.234.94\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.234.94\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.234.94\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e\",\n \"snapshotKey\": \"fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-09-22T02:47:32.04505971Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9\",\n \"image\": {\n \"annotations\": {},\n \"image\": 
\"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-5bf6l_0412b191-f9bf-4fc1-904b-bf7034dffe7e/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-09-22T02:47:33.663347876Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-09-22T02:51:27Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-09-22T02:51:27Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-09-22 02:51:27] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.234.94" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1, "userns_options": { "mode": 2 } }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": 
"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d" } ] }, "pid": 3536121, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod0412b191_f9bf_4fc1_904b_bf7034dffe7e.slice:cri-containerd:fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/3536096/ns/ipc", "type": "ipc" }, { "path": "/proc/3536096/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/3536096/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/hostname", "type": "bind" }, { 
"destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-5bf6l", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.234.94", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.234.94", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_SERVICE_HOST=10.96.234.94", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "snapshotKey": "fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "snapshotter": "overlayfs" }, "status": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-09-22T02:47:32.04505971Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", 
"image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-5bf6l_0412b191-f9bf-4fc1-904b-bf7034dffe7e/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-09-22T02:47:33.663347876Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-09-22 02:51:27] INFO -- CNTI: node_pid_by_container_id pid: 3536121 [2025-09-22 02:51:27] INFO -- CNTI: cmdline_by_pid [2025-09-22 02:51:27] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:27] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:27] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:27] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg pod/coredns-coredns-64fc886fd4-5bf6l container 'coredns' uses non-specialized init '/coredns' ✖️ 🏆FAILED: [specialized_init_system] Containers do not use specialized init systems (ভ_ভ) ރ 🚀 [2025-09-22 02:51:27] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-09-22 02:51:27] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-09-22 02:51:27] INFO -- 
CNTI-InitSystems.scan: pod/coredns-coredns-64fc886fd4-5bf6l has container 'coredns' with /coredns as init process [2025-09-22 02:51:27] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: false [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'specialized_init_system' emoji: 🚀 [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'specialized_init_system' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.Points: Task: 'specialized_init_system' type: essential [2025-09-22 02:51:27] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 0 points [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'specialized_init_system' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.Points: Task: 'specialized_init_system' type: essential [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.Points.upsert_task-specialized_init_system: Task start time: 2025-09-22 02:51:26 UTC, end time: 2025-09-22 02:51:27 UTC [2025-09-22 02:51:27] INFO -- CNTI-CNFManager.Points.upsert_task-specialized_init_system: Task: 'specialized_init_system' has status: 'failed' and is awarded: 0 points. Runtime: 00:00:01.318303709 [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:27] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-09-22 02:51:27] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:27] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:27] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:51:27] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:51:27] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:51:27] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [single_process_type] [2025-09-22 02:51:27] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:51:27] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:51:27] INFO -- CNTI-CNFManager.Task.task_runner.single_process_type: Starting test [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-09-22 02:51:27] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes
cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" 
=> "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" 
=> {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => 
"udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", 
"k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => 
"udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:27] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-09-22 02:51:27] INFO -- CNTI: Constructed resource_named_tuple: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"} [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns 
[2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-09-22 02:51:27] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:27] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:27] INFO -- CNTI: container_statuses: [{"allocatedResources" => {"cpu" => "100m", "memory" => "128Mi"}, "containerID" => "containerd://fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-09-22T02:47:33Z"}}, "user" => {"linux" => {"gid" => 0, "supplementalGroups" => [0], "uid" => 0}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-ls7sw", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-09-22 02:51:27] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-09-22 02:51:28] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-5bf6l list: latest-worker2 [2025-09-22 02:51:28] INFO -- CNTI: nodes_by_resource done [2025-09-22 02:51:28] INFO -- CNTI: before ready containerStatuses container_id fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:28] INFO -- CNTI: containerStatuses container_id fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:28] INFO -- CNTI: node_pid_by_container_id container_id: fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:28] INFO -- CNTI: parse_container_id container_id: fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:28] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:28] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:28] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:28] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:28] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:28] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-09-22T02:51:28Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-09-22T02:51:28Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." 
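The exchange above compresses the whole lookup chain used for single_process_type: the pod status reports the container as containerd://fd5d8bcd…, parse_container_id strips the runtime scheme, crictl is exec'd inside the node-local cluster-tools pod (which is why the /etc/crictl.yaml warnings come from that pod rather than the host), and the resulting .info.pid is used to read the process's NUL-separated argv. A sketch of that chain under stated assumptions: the kubectl and crictl semantics are real, but the function names are illustrative, the direct /proc read stands in for the suite's exec-through-cluster-tools plumbing, and KNOWN_INITS is an assumed allow-list, not the suite's actual one (the suite itself is Crystal):

```python
import json
import subprocess

def parse_container_id(status_container_id: str) -> str:
    # Pod status gives "containerd://<64-hex-id>"; crictl wants the bare id.
    return status_container_id.split("://", 1)[-1]

def node_pid_via_cluster_tools(pod: str, container_id: str) -> int:
    # Exec crictl inside the cluster-tools pod scheduled on the target node.
    # With no /etc/crictl.yaml in that pod, crictl probes the default runtime
    # endpoints and emits the deprecation warnings captured in the log.
    out = subprocess.run(
        ["kubectl", "exec", pod, "--", "crictl", "inspect", container_id],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)["info"]["pid"]

def cmdline_by_pid(pid: int) -> list[str]:
    # /proc/<pid>/cmdline is argv joined and terminated by NUL bytes, e.g.
    # b"/coredns\x00-conf\x00/etc/coredns/Corefile\x00" for this container.
    with open(f"/proc/{pid}/cmdline", "rb") as f:
        return [arg.decode() for arg in f.read().split(b"\x00") if arg]

KNOWN_INITS = {"tini", "dumb-init", "s6-svscan", "systemd"}  # assumed list

def uses_specialized_init(argv: list[str]) -> bool:
    # specialized_init_system failed above because argv[0] is "/coredns",
    # which is not a recognized init binary.
    return bool(argv) and argv[0].rsplit("/", 1)[-1] in KNOWN_INITS
```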
[2025-09-22 02:51:28] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: [crictl inspect JSON for container fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9; byte-for-byte the same document shown decoded in the node_pid_by_container_id dump below], error: "time=\"2025-09-22T02:51:28Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-09-22T02:51:28Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.\"\n"}
[2025-09-22 02:51:28] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.234.94" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value":
"10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1, "userns_options": { "mode": 2 } }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d" } ] }, "pid": 3536121, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod0412b191_f9bf_4fc1_904b_bf7034dffe7e.slice:cri-containerd:fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/3536096/ns/ipc", "type": "ipc" }, { "path": "/proc/3536096/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/3536096/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", 
"nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-5bf6l", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", 
"KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.234.94", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.234.94", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_SERVICE_HOST=10.96.234.94", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "snapshotKey": "fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "snapshotter": "overlayfs" }, "status": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-09-22T02:47:32.04505971Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-5bf6l_0412b191-f9bf-4fc1-904b-bf7034dffe7e/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": 
"", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-09-22T02:47:33.663347876Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-09-22 02:51:28] INFO -- CNTI: node_pid_by_container_id pid: 3536121 [2025-09-22 02:51:28] INFO -- CNTI: node pid (should never be pid 1): 3536121 [2025-09-22 02:51:28] INFO -- CNTI: node name : latest-worker2 [2025-09-22 02:51:28] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:28] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:28] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:28] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:28] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:28] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "3536121\n", error: ""} [2025-09-22 02:51:28] INFO -- CNTI: parsed pids: ["3536121"] [2025-09-22 02:51:28] INFO -- CNTI: all_statuses_by_pids [2025-09-22 02:51:28] INFO -- CNTI: all_statuses_by_pids pid: 3536121 [2025-09-22 02:51:28] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:28] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:28] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:28] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:28] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:29] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536121\nNgid:\t0\nPid:\t3536121\nPPid:\t3536069\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536121\t1\nNSpid:\t3536121\t1\nNSpgid:\t3536121\t1\nNSsid:\t3536121\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 42756 kB\nVmRSS:\t 42756 kB\nRssAnon:\t 12428 kB\nRssFile:\t 30328 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 196 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t19\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2505\nnonvoluntary_ctxt_switches:\t16\n", error: ""} [2025-09-22 02:51:29] DEBUG -- CNTI: 
[2025-09-22 02:51:29] DEBUG -- CNTI: proc process_statuses_by_node: [/proc status output for pid 3536121; identical to the ClusterTools exec output above]
[2025-09-22 02:51:29] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 3536121
[2025-09-22 02:51:29] DEBUG -- CNTI-proctree_by_pid: proc_statuses: [same /proc status output as above]
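The parse_status/parsed_status records that follow turn the raw /proc/<pid>/status text into a flat key/value hash. A rough Python equivalent of that parsing step, illustrative only; parse_proc_status is a hypothetical name, not the testsuite's API:

    def parse_proc_status(status_text: str) -> dict:
        # /proc/<pid>/status is one "Key:<tab>value" pair per line; split on
        # the first colon and strip whitespace, mirroring the parsed_status
        # hash shown below.
        parsed = {}
        for line in status_text.splitlines():
            if not line.strip():
                continue
            key, _, value = line.partition(":")
            parsed[key.strip()] = value.strip()
        return parsed

    # Example with a fragment of the status captured above for pid 3536121:
    status = "Name:\tcoredns\nState:\tS (sleeping)\nPid:\t3536121\nPPid:\t3536069\n"
    print(parse_proc_status(status))
    # => {'Name': 'coredns', 'State': 'S (sleeping)', 'Pid': '3536121', 'PPid': '3536069'}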
[2025-09-22 02:51:29] DEBUG -- CNTI: parse_status status_output: [/proc status text for pid 3536121, as captured above]
[2025-09-22 02:51:29] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536121", "Ngid" => "0", "Pid" => "3536121", "PPid" => "3536069", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3536121\t1", "NSpid" => "3536121\t1", "NSpgid" => "3536121\t1", "NSsid" => "3536121\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "42756 kB", "VmRSS" => "42756 kB", "RssAnon" => "12428 kB", "RssFile" => "30328 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "196 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "19", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2505", "nonvoluntary_ctxt_switches" => "16"}
[2025-09-22 02:51:29] DEBUG -- CNTI-proctree_by_pid:
[2025-09-22 02:51:29] INFO -- CNTI: cmdline_by_pid
[2025-09-22 02:51:29] INFO -- CNTI: exec_by_node: Called with JSON
[2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes
[2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods
[2025-09-22 02:51:29] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg
[2025-09-22 02:51:29] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg
[2025-09-22 02:51:29] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg
✔️ 🏆PASSED: [single_process_type] Only one process type used ⚖👀
[2025-09-22 02:51:29] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""}
[2025-09-22 02:51:29] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""}
[2025-09-22 02:51:29] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid
[2025-09-22 02:51:29] DEBUG -- CNTI-proctree_by_pid: proctree: [parsed_status hash as above, with "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000" appended]
[2025-09-22 02:51:29] DEBUG -- CNTI-proctree_by_pid:
[2025-09-22 02:51:29] INFO -- CNTI-single_process_type: status name: coredns
[2025-09-22 02:51:29] INFO -- CNTI-single_process_type: previous status name: initial_name
[2025-09-22 02:51:29] INFO -- CNTI: container_status_result.all?(true): false
[2025-09-22 02:51:29] INFO -- CNTI: pod_resp.all?(true): false
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'single_process_type' emoji: ⚖👀
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'single_process_type' tags: ["microservice", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.Points: Task: 'single_process_type' type: essential
[2025-09-22 02:51:29] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'single_process_type' tags: ["microservice", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.Points: Task: 'single_process_type' type: essential
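The PASSED verdict above rests on a simple invariant: every entry in the container's process tree must report the same process name, i.e. exactly one process type is in use. A sketch of that comparison in Python; the actual Crystal implementation walks the tree comparing each status name against the previous one, as the records above show, so this is the shape of the check rather than its source:

    def single_process_type(proc_statuses):
        # proc_statuses: parsed /proc/<pid>/status dicts for the container's
        # process tree, as in the proctree record above.
        names = {status["Name"] for status in proc_statuses}
        return len(names) == 1

    # For this run the tree holds a single "coredns" process, so:
    print(single_process_type([{"Name": "coredns"}]))  # True -> PASSED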
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.Points.upsert_task-single_process_type: Task start time: 2025-09-22 02:51:27 UTC, end time: 2025-09-22 02:51:29 UTC
[2025-09-22 02:51:29] INFO -- CNTI-CNFManager.Points.upsert_task-single_process_type: Task: 'single_process_type' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:01.937134980
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:29] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:51:29] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:29] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:29] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:51:29] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:51:29] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:51:29] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [zombie_handled]
[2025-09-22 02:51:29] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:51:29] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:51:29] INFO -- CNTI-CNFManager.Task.task_runner.zombie_handled: Starting test
[2025-09-22 02:51:29] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"]
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources
[2025-09-22 02:51:29] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels
[2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods
[2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns
[2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}
[2025-09-22 02:51:29] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-5bf6l
[2025-09-22 02:51:29] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-5bf6l
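zombie_handled now walks the CNF's resources, resolving each pod's containers to host PIDs so it can examine their process trees; a zombie shows up in /proc/<pid>/status as State: Z. A generic detection sketch, assuming direct /proc access on the node (find_zombies is a hypothetical name, not the testsuite's API):

    from pathlib import Path

    def find_zombies(pids):
        # Scan /proc/<pid>/status and collect processes in zombie state ("Z"),
        # i.e. exited children their parent never reaped with wait().
        zombies = []
        for pid in pids:
            try:
                status = Path(f"/proc/{pid}/status").read_text()
            except FileNotFoundError:
                continue  # process exited between listing and reading
            fields = dict(
                line.split(":", 1) for line in status.splitlines() if ":" in line
            )
            if fields.get("State", "").strip().startswith("Z"):
                zombies.append(pid)
        return zombies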
"state" => {"running" => {"startedAt" => "2025-09-22T02:47:33Z"}}, "user" => {"linux" => {"gid" => 0, "supplementalGroups" => [0], "uid" => 0}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-ls7sw", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-09-22 02:51:29] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-09-22 02:51:30] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-5bf6l list: latest-worker2 [2025-09-22 02:51:30] INFO -- CNTI: nodes_by_resource done [2025-09-22 02:51:30] INFO -- CNTI: before ready containerStatuses container_id fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:30] INFO -- CNTI: containerStatuses container_id fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:30] INFO -- CNTI: node_pid_by_container_id container_id: fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:30] INFO -- CNTI: parse_container_id container_id: fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:30] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:30] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:30] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:30] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:30] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:30] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-09-22T02:51:30Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-09-22T02:51:30Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." 
[2025-09-22 02:51:30] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: [crictl inspect JSON for container fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9; byte-for-byte the same document shown decoded in the node_pid_by_container_id dump below], error: "time=\"2025-09-22T02:51:30Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-09-22T02:51:30Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.\"\n"}
[2025-09-22 02:51:30] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.234.94" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value":
"10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1, "userns_options": { "mode": 2 } }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d" } ] }, "pid": 3536121, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod0412b191_f9bf_4fc1_904b_bf7034dffe7e.slice:cri-containerd:fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/3536096/ns/ipc", "type": "ipc" }, { "path": "/proc/3536096/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/3536096/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", 
"nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-5bf6l", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", 
"KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.234.94", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.234.94", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_SERVICE_HOST=10.96.234.94", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "snapshotKey": "fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "snapshotter": "overlayfs" }, "status": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-09-22T02:47:32.04505971Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-5bf6l_0412b191-f9bf-4fc1-904b-bf7034dffe7e/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": 
"", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-09-22T02:47:33.663347876Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-09-22 02:51:30] INFO -- CNTI: node_pid_by_container_id pid: 3536121 [2025-09-22 02:51:30] INFO -- CNTI: node pid (should never be pid 1): 3536121 [2025-09-22 02:51:30] INFO -- CNTI: node name : latest-worker2 [2025-09-22 02:51:30] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:30] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:30] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:30] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:30] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:30] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-09-22 02:51:30] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:30] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:30] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:30] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:30] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:31] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-09-22 02:51:31] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:31] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:31] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:31] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:31] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:31] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:33] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Sleeping...\n", error: ""} [2025-09-22 02:51:33] INFO -- CNTI: container_status_result.all?(true): false [2025-09-22 02:51:33] INFO -- CNTI: pod_resp.all?(true): false [2025-09-22 02:51:43] INFO -- CNTI-CNFManager.workload_resource_test: Starting test [2025-09-22 02:51:43] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"] [2025-09-22 02:51:43] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources [2025-09-22 02:51:43] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:51:43] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-09-22 02:51:43] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-09-22 02:51:43] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:43] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-09-22 02:51:43] DEBUG -- 
CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:43] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-09-22 02:51:44] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:44] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:44] INFO -- CNTI: container_statuses: [{"allocatedResources" => {"cpu" => "100m", "memory" => "128Mi"}, "containerID" => "containerd://fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-09-22T02:47:33Z"}}, "user" => {"linux" => {"gid" => 0, "supplementalGroups" => [0], "uid" => 0}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-ls7sw", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-09-22 02:51:44] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-5bf6l [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-09-22 02:51:44] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-5bf6l list: latest-worker2 [2025-09-22 02:51:44] INFO -- CNTI: nodes_by_resource done [2025-09-22 02:51:44] INFO -- CNTI: before ready containerStatuses container_id fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:44] INFO -- CNTI: containerStatuses container_id fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:44] INFO -- CNTI: node_pid_by_container_id container_id: fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:44] INFO -- CNTI: parse_container_id container_id: fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:44] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:44] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:44] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:44] INFO -- 
CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:44] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-09-22T02:51:44Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-09-22T02:51:44Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." [2025-09-22 02:51:44] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.234.94:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.234.94:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.234.94:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n 
{\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1,\n \"userns_options\": {\n \"mode\": 2\n }\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d\"\n }\n ]\n },\n \"pid\": 3536121,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-pod0412b191_f9bf_4fc1_904b_bf7034dffe7e.slice:cri-containerd:fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/3536096/ns/ipc\",\n \"type\": \"ipc\"\n },\n {\n \"path\": \"/proc/3536096/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/3536096/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n 
\"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n 
\"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-64fc886fd4-5bf6l\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.234.94:53\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.234.94:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.234.94:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.234.94\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.234.94\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.234.94\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e\",\n \"snapshotKey\": \"fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-09-22T02:47:32.04505971Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-5bf6l_0412b191-f9bf-4fc1-904b-bf7034dffe7e/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": 
\"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-09-22T02:47:33.663347876Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-09-22T02:51:44Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-09-22T02:51:44Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-09-22 02:51:44] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.234.94" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1, "userns_options": { "mode": 2 } }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": 
"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d" } ] }, "pid": 3536121, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod0412b191_f9bf_4fc1_904b_bf7034dffe7e.slice:cri-containerd:fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/3536096/ns/ipc", "type": "ipc" }, { "path": "/proc/3536096/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/3536096/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/hostname", "type": "bind" }, { 
"destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-5bf6l", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.234.94", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.234.94", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_SERVICE_HOST=10.96.234.94", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "snapshotKey": "fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", "snapshotter": "overlayfs" }, "status": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-09-22T02:47:32.04505971Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9", 
"image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-5bf6l_0412b191-f9bf-4fc1-904b-bf7034dffe7e/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/836eb71d", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-09-22T02:47:33.663347876Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-09-22 02:51:44] INFO -- CNTI: node_pid_by_container_id pid: 3536121 [2025-09-22 02:51:44] INFO -- CNTI: node pid (should never be pid 1): 3536121 [2025-09-22 02:51:44] INFO -- CNTI: node name : latest-worker2 [2025-09-22 02:51:44] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:44] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:44] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:44] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:45] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "3536121\n3537926\n", error: ""} [2025-09-22 02:51:45] INFO -- CNTI: parsed pids: ["3536121", "3537926"] [2025-09-22 02:51:45] INFO -- CNTI: all_statuses_by_pids [2025-09-22 02:51:45] INFO -- CNTI: all_statuses_by_pids pid: 3536121 [2025-09-22 02:51:45] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:45] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: 
Creating list of pods found on nodes [2025-09-22 02:51:45] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:45] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:45] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:45] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:45] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536121\nNgid:\t0\nPid:\t3536121\nPPid:\t3536069\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536121\t1\nNSpid:\t3536121\t1\nNSpgid:\t3536121\t1\nNSsid:\t3536121\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 42756 kB\nVmRSS:\t 42756 kB\nRssAnon:\t 12428 kB\nRssFile:\t 30328 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 196 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t19\nSigQ:\t4/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2699\nnonvoluntary_ctxt_switches:\t17\n", error: ""} [2025-09-22 02:51:45] INFO -- CNTI: all_statuses_by_pids pid: 3537926 [2025-09-22 02:51:45] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:45] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:45] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:45] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:45] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:45] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:45] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t3537926\nNgid:\t0\nPid:\t3537926\nPPid:\t3536121\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t3537926\t38\nNSpid:\t3537926\t38\nNSpgid:\t3537920\t32\nNSsid:\t3537920\t32\nThreads:\t1\nSigQ:\t4/256612\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-09-22 02:51:45] DEBUG -- CNTI: proc process_statuses_by_node: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536121\nNgid:\t0\nPid:\t3536121\nPPid:\t3536069\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536121\t1\nNSpid:\t3536121\t1\nNSpgid:\t3536121\t1\nNSsid:\t3536121\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 42756 kB\nVmRSS:\t 42756 kB\nRssAnon:\t 12428 kB\nRssFile:\t 30328 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 196 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t19\nSigQ:\t4/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2699\nnonvoluntary_ctxt_switches:\t17\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t3537926\nNgid:\t0\nPid:\t3537926\nPPid:\t3536121\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t3537926\t38\nNSpid:\t3537926\t38\nNSpgid:\t3537920\t32\nNSsid:\t3537920\t32\nThreads:\t1\nSigQ:\t4/256612\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-09-22 02:51:45] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 3536121 [2025-09-22 02:51:45] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tcoredns\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t3536121\nNgid:\t0\nPid:\t3536121\nPPid:\t3536069\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536121\t1\nNSpid:\t3536121\t1\nNSpgid:\t3536121\t1\nNSsid:\t3536121\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 42756 kB\nVmRSS:\t 42756 kB\nRssAnon:\t 12428 kB\nRssFile:\t 30328 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 196 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t19\nSigQ:\t4/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2699\nnonvoluntary_ctxt_switches:\t17\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t3537926\nNgid:\t0\nPid:\t3537926\nPPid:\t3536121\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t3537926\t38\nNSpid:\t3537926\t38\nNSpgid:\t3537920\t32\nNSsid:\t3537920\t32\nThreads:\t1\nSigQ:\t4/256612\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-09-22 02:51:45] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 3536121 Ngid: 0 Pid: 3536121 PPid: 3536069 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 3536121 1 NSpid: 3536121 1 NSpgid: 3536121 1 NSsid: 3536121 1 VmPeak: 747724 kB VmSize: 747724 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 42756 kB VmRSS: 42756 kB RssAnon: 12428 kB RssFile: 30328 kB RssShmem: 0 kB VmData: 107912 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 196 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 19 SigQ: 4/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread 
vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 2699 nonvoluntary_ctxt_switches: 17 [2025-09-22 02:51:45] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536121", "Ngid" => "0", "Pid" => "3536121", "PPid" => "3536069", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3536121\t1", "NSpid" => "3536121\t1", "NSpgid" => "3536121\t1", "NSsid" => "3536121\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "42756 kB", "VmRSS" => "42756 kB", "RssAnon" => "12428 kB", "RssFile" => "30328 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "196 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "19", "SigQ" => "4/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2699", "nonvoluntary_ctxt_switches" => "17"} [2025-09-22 02:51:45] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:51:45] INFO -- CNTI: cmdline_by_pid [2025-09-22 02:51:45] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:45] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:45] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:45] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:45] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:45] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:46] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-09-22 02:51:46] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-09-22 02:51:46] DEBUG -- CNTI: parse_status status_output: Name: sleep State: Z (zombie) Tgid: 3537926 Ngid: 0 Pid: 3537926 PPid: 3536121 
TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 0 Groups: 0 NStgid: 3537926 38 NSpid: 3537926 38 NSpgid: 3537920 32 NSsid: 3537920 32 Threads: 1 SigQ: 4/256612 SigPnd: 0000000000001000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 2 nonvoluntary_ctxt_switches: 0
[2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "3537926", "Ngid" => "0", "Pid" => "3537926", "PPid" => "3536121", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "3537926\t38", "NSpid" => "3537926\t38", "NSpgid" => "3537920\t32", "NSsid" => "3537920\t32", "Threads" => "1", "SigQ" => "4/256612", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0"}
[2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid:
[2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: proctree_by_pid ppid == pid && ppid != current_pid
[2025-09-22 02:51:46] INFO -- CNTI: cmdline_by_pid
[2025-09-22 02:51:46] INFO -- CNTI: exec_by_node: Called with JSON
[2025-09-22 02:51:46] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes
[2025-09-22 02:51:46] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods
[2025-09-22 02:51:46] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg
[2025-09-22 02:51:46] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg
[2025-09-22 02:51:46] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg
[2025-09-22 02:51:46] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""}
[2025-09-22 02:51:46] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "", error: ""}
[2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: Matched descendant cmdline
[2025-09-22 02:51:46] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 3537926
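The parse_status and cmdline_by_pid steps logged above boil down to two small transformations: each "Key:\tvalue" line of /proc/<pid>/status becomes one map entry, and /proc/<pid>/cmdline is a NUL-separated argv. A minimal Python sketch of that reading (the suite itself is written in Crystal, so these helper names are illustrative, not its API); it also shows why a cmdline printed with its NUL bytes simply dropped runs together, as in the "status cmdline: /coredns-conf/etc/coredns/Corefile" entry further down:

    def parse_status(status_output: str) -> dict[str, str]:
        """Turn the 'Key:\tvalue' lines of /proc/<pid>/status into a dict."""
        parsed: dict[str, str] = {}
        for line in status_output.splitlines():
            key, _, value = line.partition(":")
            parsed[key] = value.strip()
        return parsed

    def split_cmdline(raw: str) -> list[str]:
        """/proc/<pid>/cmdline separates argv entries with NUL bytes."""
        return [arg for arg in raw.split("\x00") if arg]

    status = parse_status("Name:\tsleep\nState:\tZ (zombie)\nPid:\t3537926\nPPid:\t3536121")
    assert status == {"Name": "sleep", "State": "Z (zombie)", "Pid": "3537926", "PPid": "3536121"}
    argv = split_cmdline("/coredns\x00-conf\x00/etc/coredns/Corefile\x00")
    assert argv == ["/coredns", "-conf", "/etc/coredns/Corefile"]
    # Dropping the NULs instead of splitting on them yields the run-together form:
    assert "".join(argv) == "/coredns-conf/etc/coredns/Corefile"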
[2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536121\nNgid:\t0\nPid:\t3536121\nPPid:\t3536069\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536121\t1\nNSpid:\t3536121\t1\nNSpgid:\t3536121\t1\nNSsid:\t3536121\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 42756 kB\nVmRSS:\t 42756 kB\nRssAnon:\t 12428 kB\nRssFile:\t 30328 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 196 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t19\nSigQ:\t4/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2699\nnonvoluntary_ctxt_switches:\t17\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t3537926\nNgid:\t0\nPid:\t3537926\nPPid:\t3536121\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t3537926\t38\nNSpid:\t3537926\t38\nNSpgid:\t3537920\t32\nNSsid:\t3537920\t32\nThreads:\t1\nSigQ:\t4/256612\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-09-22 02:51:46] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 3536121 Ngid: 0 Pid: 3536121 PPid: 3536069 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 3536121 1 NSpid: 3536121 1 NSpgid: 3536121 1 NSsid: 3536121 1 VmPeak: 747724 kB VmSize: 747724 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 42756 kB VmRSS: 42756 kB RssAnon: 12428 kB RssFile: 30328 kB RssShmem: 0 kB VmData: 107912 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 196 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 19 SigQ: 4/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 
00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 2699 nonvoluntary_ctxt_switches: 17 [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536121", "Ngid" => "0", "Pid" => "3536121", "PPid" => "3536069", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3536121\t1", "NSpid" => "3536121\t1", "NSpgid" => "3536121\t1", "NSsid" => "3536121\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "42756 kB", "VmRSS" => "42756 kB", "RssAnon" => "12428 kB", "RssFile" => "30328 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "196 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "19", "SigQ" => "4/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2699", "nonvoluntary_ctxt_switches" => "17"} [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:51:46] DEBUG -- CNTI: parse_status status_output: Name: sleep State: Z (zombie) Tgid: 3537926 Ngid: 0 Pid: 3537926 PPid: 3536121 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 0 Groups: 0 NStgid: 3537926 38 NSpid: 3537926 38 NSpgid: 3537920 32 NSsid: 3537920 32 Threads: 1 SigQ: 4/256612 SigPnd: 0000000000001000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 2 nonvoluntary_ctxt_switches: 0 [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "3537926", "Ngid" => "0", "Pid" => "3537926", "PPid" => "3536121", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "3537926\t38", "NSpid" => "3537926\t38", "NSpgid" => "3537920\t32", "NSsid" => "3537920\t32", "Threads" => "1", "SigQ" => "4/256612", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0"} [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:51:46] INFO -- CNTI: cmdline_by_pid [2025-09-22 02:51:46] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:46] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:46] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:46] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:46] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:46] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg Process sleep in container fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 of pod coredns-coredns-64fc886fd4-5bf6l has a state of Z (zombie) [2025-09-22 02:51:46] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-09-22 02:51:46] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "", error: ""} [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "3537926", "Ngid" => "0", "Pid" => "3537926", "PPid" => "3536121", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "3537926\t38", "NSpid" => "3537926\t38", "NSpgid" => "3537920\t32", "NSsid" => "3537920\t32", "Threads" => "1", "SigQ" => "4/256612", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", 
"CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0", "cmdline" => ""}] [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536121", "Ngid" => "0", "Pid" => "3536121", "PPid" => "3536069", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3536121\t1", "NSpid" => "3536121\t1", "NSpgid" => "3536121\t1", "NSsid" => "3536121\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "42756 kB", "VmRSS" => "42756 kB", "RssAnon" => "12428 kB", "RssFile" => "30328 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "196 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "19", "SigQ" => "4/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2699", "nonvoluntary_ctxt_switches" => "17", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"}, {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "3537926", "Ngid" => "0", "Pid" => "3537926", "PPid" => "3536121", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "3537926\t38", "NSpid" => "3537926\t38", "NSpgid" => "3537920\t32", "NSsid" => "3537920\t32", "Threads" => "1", "SigQ" => "4/256612", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", 
"SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0", "cmdline" => ""}] [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:51:46] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:51:46] DEBUG -- CNTI-zombie_handled: status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536121", "Ngid" => "0", "Pid" => "3536121", "PPid" => "3536069", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3536121\t1", "NSpid" => "3536121\t1", "NSpgid" => "3536121\t1", "NSsid" => "3536121\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "42756 kB", "VmRSS" => "42756 kB", "RssAnon" => "12428 kB", "RssFile" => "30328 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "196 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "19", "SigQ" => "4/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2699", "nonvoluntary_ctxt_switches" => "17", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"} [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: status cmdline: /coredns-conf/etc/coredns/Corefile [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: pid: 3536121 [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: status name: coredns [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: state: S (sleeping) [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: (state =~ /zombie/): [2025-09-22 02:51:46] DEBUG -- CNTI-zombie_handled: status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "3537926", "Ngid" => "0", "Pid" => "3537926", "PPid" => "3536121", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "3537926\t38", "NSpid" => "3537926\t38", "NSpgid" => "3537920\t32", "NSsid" => "3537920\t32", "Threads" => "1", "SigQ" => "4/256612", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => 
"0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0", "cmdline" => ""} [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: status cmdline: [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: pid: 3537926 [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: status name: sleep [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: state: Z (zombie) [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: (state =~ /zombie/): 3 [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: zombies.all?(nil): false [2025-09-22 02:51:46] INFO -- CNTI: container_status_result.all?(true): false [2025-09-22 02:51:46] INFO -- CNTI: pod_resp.all?(true): false [2025-09-22 02:51:46] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: false [2025-09-22 02:51:46] INFO -- CNTI-zombie_handled: Shutting down container fd5d8bcdd87d1cf1bcb29af76027cac6ef7005a2f82060bcc5d824c261d820e9 [2025-09-22 02:51:46] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:51:46] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:51:46] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:51:46] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:51:46] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:51:46] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:51:47] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-09-22 02:52:07] INFO -- CNTI-zombie_handled: Waiting for pod coredns-coredns-64fc886fd4-5bf6l in namespace cnf-default to become Ready... 
[2025-09-22 02:52:07] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: Waiting for pod/coredns-coredns-64fc886fd4-5bf6l to be available
[2025-09-22 02:52:07] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: seconds elapsed while waiting: 0
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource pod/coredns-coredns-64fc886fd4-5bf6l is ready
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.pod_status: Get status of pod/coredns-coredns-64fc886fd4-5bf6l* with field selector:
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods
✖️ 🏆FAILED: [zombie_handled] Zombie not handled ⚖👀
[2025-09-22 02:52:10] INFO -- CNTI-KubectlClient.Get.pod_status: 'Ready' pods: coredns-coredns-64fc886fd4-5bf6l
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'zombie_handled' emoji: ⚖👀
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'zombie_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.Points: Task: 'zombie_handled' type: essential
[2025-09-22 02:52:10] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 0 points
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'zombie_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.Points: Task: 'zombie_handled' type: essential
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.Points.upsert_task-zombie_handled: Task start time: 2025-09-22 02:51:29 UTC, end time: 2025-09-22 02:52:10 UTC
[2025-09-22 02:52:10] INFO -- CNTI-CNFManager.Points.upsert_task-zombie_handled: Task: 'zombie_handled' has status: 'failed' and is awarded: 0 points. Runtime: 00:00:40.826516283
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:52:10] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:52:10] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:52:10] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:52:10] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:52:10] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:52:10] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:52:10] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [sig_term_handled]
[2025-09-22 02:52:10] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:52:10] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:52:10] INFO -- CNTI-CNFManager.Task.task_runner.sig_term_handled: Starting test
[2025-09-22 02:52:10] INFO -- CNTI-CNFManager.workload_resource_test: Starting test
[2025-09-22 02:52:10] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"]
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources
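The sig_term_handled run starting above goes through the same workload_resource_test harness: every YAML document in the installed manifest is loaded, and the documents whose kind is in the yielded resource set are tested one by one, as the entries below show. A short Python sketch under that reading (PyYAML assumed; the helper name and kind filter are illustrative, only the manifest path and resource list come from the log):

    import yaml  # PyYAML

    TESTED_KINDS = {"ReplicaSet", "Deployment", "StatefulSet", "Pod", "DaemonSet"}

    def workload_resources(manifest_path: str):
        """Yield the manifest documents whose kind the suite iterates over."""
        with open(manifest_path) as f:
            for doc in yaml.safe_load_all(f):
                if doc and doc.get("kind") in TESTED_KINDS:
                    yield doc

    for resource in workload_resources("installed_cnf_files/common_manifest.yml"):
        print(f"Testing {resource['kind']}/{resource['metadata']['name']}")
        # e.g. "Testing Deployment/coredns-coredns", as logged just below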
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:52:10] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns
[2025-09-22 02:52:10] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:52:11] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}
[2025-09-22 02:52:11] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-5bf6l
[2025-09-22 02:52:11] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: Waiting for pod/coredns-coredns-64fc886fd4-5bf6l to be available
[2025-09-22 02:52:11] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: seconds elapsed while waiting: 0
[2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource pod/coredns-coredns-64fc886fd4-5bf6l is ready
[2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.Get.pod_status: Get status of pod/coredns-coredns-64fc886fd4-5bf6l* with field selector:
[2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods
[2025-09-22 02:52:14] INFO -- CNTI-KubectlClient.Get.pod_status: 'Ready' pods: coredns-coredns-64fc886fd4-5bf6l
[2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-5bf6l
[2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes
[2025-09-22 02:52:14] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-5bf6l list: latest-worker2
[2025-09-22 02:52:14] INFO -- CNTI: node_pid_by_container_id container_id: containerd://c7b711844c7cfcad9041cb1452431531b6ab1b5668bfba908a128401e4dc5c6c
[2025-09-22 02:52:14] INFO -- CNTI: parse_container_id container_id: containerd://c7b711844c7cfcad9041cb1452431531b6ab1b5668bfba908a128401e4dc5c6c
[2025-09-22 02:52:14] INFO -- CNTI: exec_by_node: Called with JSON
[2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes
[2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods
[2025-09-22 02:52:14] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg
[2025-09-22 02:52:14] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg
[2025-09-22 02:52:14] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg
[2025-09-22 02:52:14] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-09-22T02:52:14Z" level=warning msg="Config
\"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-09-22T02:52:14Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." [2025-09-22 02:52:14] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"1\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.234.94:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.234.94:53\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.234.94\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.234.94:53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n 
\"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1,\n \"userns_options\": {\n \"mode\": 2\n }\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/1.log\",\n \"metadata\": {\n \"attempt\": 1,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/56bc32d4\"\n }\n ]\n },\n \"pid\": 3538155,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-pod0412b191_f9bf_4fc1_904b_bf7034dffe7e.slice:cri-containerd:c7b711844c7cfcad9041cb1452431531b6ab1b5668bfba908a128401e4dc5c6c\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/3536096/ns/ipc\",\n \"type\": \"ipc\"\n },\n {\n \"path\": \"/proc/3536096/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/3536096/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n 
\"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/56bc32d4\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n 
\"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-64fc886fd4-5bf6l\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.234.94:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.234.94:53\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.234.94\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.234.94\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.234.94\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.234.94:53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e\",\n \"snapshotKey\": \"c7b711844c7cfcad9041cb1452431531b6ab1b5668bfba908a128401e4dc5c6c\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"1\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-09-22T02:51:47.711334548Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"c7b711844c7cfcad9041cb1452431531b6ab1b5668bfba908a128401e4dc5c6c\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-5bf6l\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"0412b191-f9bf-4fc1-904b-bf7034dffe7e\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-5bf6l_0412b191-f9bf-4fc1-904b-bf7034dffe7e/coredns/1.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 1,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n 
\"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/56bc32d4\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-09-22T02:51:49.164427036Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-09-22T02:52:14Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-09-22T02:52:14Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-09-22 02:52:14] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "1", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.234.94:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.234.94:53" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.234.94" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.234.94" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.234.94" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.234.94:53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1, "userns_options": { "mode": 2 } }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/1.log", "metadata": { "attempt": 1, "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": 
"/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/56bc32d4" } ] }, "pid": 3538155, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod0412b191_f9bf_4fc1_904b_bf7034dffe7e.slice:cri-containerd:c7b711844c7cfcad9041cb1452431531b6ab1b5668bfba908a128401e4dc5c6c", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/3536096/ns/ipc", "type": "ipc" }, { "path": "/proc/3536096/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/3536096/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/56bc32d4", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/hostname", "type": "bind" }, { 
"destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-5bf6l", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.234.94:53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.234.94:53", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "COREDNS_COREDNS_SERVICE_HOST=10.96.234.94", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.234.94", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.234.94", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.234.94:53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "240470734f5100c3bbbec88eb830ecdbc51f7696bab8901a9fe02d7d50322c2e", "snapshotKey": "c7b711844c7cfcad9041cb1452431531b6ab1b5668bfba908a128401e4dc5c6c", "snapshotter": "overlayfs" }, "status": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "1", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-09-22T02:51:47.711334548Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "c7b711844c7cfcad9041cb1452431531b6ab1b5668bfba908a128401e4dc5c6c", 
"image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-5bf6l", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "0412b191-f9bf-4fc1-904b-bf7034dffe7e" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-5bf6l_0412b191-f9bf-4fc1-904b-bf7034dffe7e/coredns/1.log", "message": "", "metadata": { "attempt": 1, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/volumes/kubernetes.io~projected/kube-api-access-ls7sw", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/0412b191-f9bf-4fc1-904b-bf7034dffe7e/containers/coredns/56bc32d4", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-09-22T02:51:49.164427036Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-09-22 02:52:14] INFO -- CNTI: node_pid_by_container_id pid: 3538155 [2025-09-22 02:52:14] INFO -- CNTI: pids [2025-09-22 02:52:14] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:14] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:14] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:14] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:15] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: 
"1\n162\n1756\n184\n312\n3535941\n3535966\n3535993\n3536069\n3536096\n3536721\n3536747\n3536773\n3538155\n3538235\n392\n410\n447\n456\n506\n606\n805\n830\n862\nacpi\nbootconfig\nbuddyinfo\nbus\ncgroups\ncmdline\nconsoles\ncpuinfo\ncrypto\ndevices\ndiskstats\ndma\ndriver\ndynamic_debug\nexecdomains\nfb\nfilesystems\nfs\ninterrupts\niomem\nioports\nirq\nkallsyms\nkcore\nkey-users\nkeys\nkmsg\nkpagecgroup\nkpagecount\nkpageflags\nloadavg\nlocks\nmdstat\nmeminfo\nmisc\nmodules\nmounts\nmtrr\nnet\npagetypeinfo\npartitions\npressure\nschedstat\nscsi\nself\nslabinfo\nsoftirqs\nstat\nswaps\nsys\nsysrq-trigger\nsysvipc\nthread-self\ntimer_list\ntty\nuptime\nversion\nversion_signature\nvmallocinfo\nvmstat\nzoneinfo\n", error: ""} [2025-09-22 02:52:15] INFO -- CNTI: pids ls_proc: {status: Process::Status[0], output: "1\n162\n1756\n184\n312\n3535941\n3535966\n3535993\n3536069\n3536096\n3536721\n3536747\n3536773\n3538155\n3538235\n392\n410\n447\n456\n506\n606\n805\n830\n862\nacpi\nbootconfig\nbuddyinfo\nbus\ncgroups\ncmdline\nconsoles\ncpuinfo\ncrypto\ndevices\ndiskstats\ndma\ndriver\ndynamic_debug\nexecdomains\nfb\nfilesystems\nfs\ninterrupts\niomem\nioports\nirq\nkallsyms\nkcore\nkey-users\nkeys\nkmsg\nkpagecgroup\nkpagecount\nkpageflags\nloadavg\nlocks\nmdstat\nmeminfo\nmisc\nmodules\nmounts\nmtrr\nnet\npagetypeinfo\npartitions\npressure\nschedstat\nscsi\nself\nslabinfo\nsoftirqs\nstat\nswaps\nsys\nsysrq-trigger\nsysvipc\nthread-self\ntimer_list\ntty\nuptime\nversion\nversion_signature\nvmallocinfo\nvmstat\nzoneinfo\n", error: ""} [2025-09-22 02:52:15] DEBUG -- CNTI: parse_ls ls: 1 162 1756 184 312 3535941 3535966 3535993 3536069 3536096 3536721 3536747 3536773 3538155 3538235 392 410 447 456 506 606 805 830 862 acpi bootconfig buddyinfo bus cgroups cmdline consoles cpuinfo crypto devices diskstats dma driver dynamic_debug execdomains fb filesystems fs interrupts iomem ioports irq kallsyms kcore key-users keys kmsg kpagecgroup kpagecount kpageflags loadavg locks mdstat meminfo misc modules mounts mtrr net pagetypeinfo partitions pressure schedstat scsi self slabinfo softirqs stat swaps sys sysrq-trigger sysvipc thread-self timer_list tty uptime version version_signature vmallocinfo vmstat zoneinfo [2025-09-22 02:52:15] DEBUG -- CNTI: parse_ls parsed: ["1", "162", "1756", "184", "312", "3535941", "3535966", "3535993", "3536069", "3536096", "3536721", "3536747", "3536773", "3538155", "3538235", "392", "410", "447", "456", "506", "606", "805", "830", "862", "acpi", "bootconfig", "buddyinfo", "bus", "cgroups", "cmdline", "consoles", "cpuinfo", "crypto", "devices", "diskstats", "dma", "driver", "dynamic_debug", "execdomains", "fb", "filesystems", "fs", "interrupts", "iomem", "ioports", "irq", "kallsyms", "kcore", "key-users", "keys", "kmsg", "kpagecgroup", "kpagecount", "kpageflags", "loadavg", "locks", "mdstat", "meminfo", "misc", "modules", "mounts", "mtrr", "net", "pagetypeinfo", "partitions", "pressure", "schedstat", "scsi", "self", "slabinfo", "softirqs", "stat", "swaps", "sys", "sysrq-trigger", "sysvipc", "thread-self", "timer_list", "tty", "uptime", "version", "version_signature", "vmallocinfo", "vmstat", "zoneinfo"] [2025-09-22 02:52:15] DEBUG -- CNTI: pids_from_ls_proc ls: ["1", "162", "1756", "184", "312", "3535941", "3535966", "3535993", "3536069", "3536096", "3536721", "3536747", "3536773", "3538155", "3538235", "392", "410", "447", "456", "506", "606", "805", "830", "862", "acpi", "bootconfig", "buddyinfo", "bus", "cgroups", "cmdline", "consoles", "cpuinfo", "crypto", "devices", "diskstats", 
"dma", "driver", "dynamic_debug", "execdomains", "fb", "filesystems", "fs", "interrupts", "iomem", "ioports", "irq", "kallsyms", "kcore", "key-users", "keys", "kmsg", "kpagecgroup", "kpagecount", "kpageflags", "loadavg", "locks", "mdstat", "meminfo", "misc", "modules", "mounts", "mtrr", "net", "pagetypeinfo", "partitions", "pressure", "schedstat", "scsi", "self", "slabinfo", "softirqs", "stat", "swaps", "sys", "sysrq-trigger", "sysvipc", "thread-self", "timer_list", "tty", "uptime", "version", "version_signature", "vmallocinfo", "vmstat", "zoneinfo"] [2025-09-22 02:52:15] DEBUG -- CNTI: pids_from_ls_proc pids: ["1", "162", "1756", "184", "312", "3535941", "3535966", "3535993", "3536069", "3536096", "3536721", "3536747", "3536773", "3538155", "3538235", "392", "410", "447", "456", "506", "606", "805", "830", "862"] [2025-09-22 02:52:15] INFO -- CNTI: all_statuses_by_pids [2025-09-22 02:52:15] INFO -- CNTI: all_statuses_by_pids pid: 1 [2025-09-22 02:52:15] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:15] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:15] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:15] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:15] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:15] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:15] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 32596 kB\nVmSize:\t 31024 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 23796 kB\nVmRSS:\t 22280 kB\nRssAnon:\t 13496 kB\nRssFile:\t 8784 kB\nRssShmem:\t 0 kB\nVmData:\t 13016 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 92 kB\nVmSwap:\t 128 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t992320\nnonvoluntary_ctxt_switches:\t48660\n", error: ""} [2025-09-22 02:52:15] INFO -- CNTI: all_statuses_by_pids pid: 162 [2025-09-22 02:52:15] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:15] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:15] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:15] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:15] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg 
[2025-09-22 02:52:15] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:15] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsystemd-journal\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t162\nNgid:\t29234\nPid:\t162\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t162\nNSpid:\t162\nNSpgid:\t162\nNSsid:\t162\nVmPeak:\t 438860 kB\nVmSize:\t 380936 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 295792 kB\nVmRSS:\t 249896 kB\nRssAnon:\t 508 kB\nRssFile:\t 7120 kB\nRssShmem:\t 242268 kB\nVmData:\t 8964 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 660 kB\nVmSwap:\t 632 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000100000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t686204\nnonvoluntary_ctxt_switches:\t1978\n", error: ""} [2025-09-22 02:52:15] INFO -- CNTI: all_statuses_by_pids pid: 1756 [2025-09-22 02:52:15] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:15] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:15] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:15] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:15] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:15] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:16] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t1756\nNgid:\t0\nPid:\t1756\nPPid:\t862\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1756\t888\nNSpid:\t1756\t888\nNSpgid:\t862\t1\nNSsid:\t862\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 40 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-09-22 02:52:16] INFO -- CNTI: all_statuses_by_pids pid: 184 [2025-09-22 02:52:16] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:16] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:16] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:16] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:16] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:16] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:16] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t184\nNgid:\t0\nPid:\t184\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t1024\nGroups:\t0 \nNStgid:\t184\nNSpid:\t184\nNSpgid:\t184\nNSsid:\t184\nVmPeak:\t 9187576 kB\nVmSize:\t 8788308 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 176412 kB\nVmRSS:\t 93728 kB\nRssAnon:\t 63700 kB\nRssFile:\t 30028 kB\nRssShmem:\t 0 kB\nVmData:\t 730108 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1296 kB\nVmSwap:\t 1332 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t65\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t126\nnonvoluntary_ctxt_switches:\t1\n", error: ""} [2025-09-22 02:52:16] INFO -- CNTI: all_statuses_by_pids pid: 312 [2025-09-22 02:52:16] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:16] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:16] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:16] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:16] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:16] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:16] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkubelet\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t312\nNgid:\t0\nPid:\t312\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 
\nNStgid:\t312\nNSpid:\t312\nNSpgid:\t312\nNSsid:\t312\nVmPeak:\t 8464400 kB\nVmSize:\t 8398864 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 202948 kB\nVmRSS:\t 137384 kB\nRssAnon:\t 90552 kB\nRssFile:\t 46832 kB\nRssShmem:\t 0 kB\nVmData:\t 1013280 kB\nVmStk:\t 132 kB\nVmExe:\t 36928 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1420 kB\nVmSwap:\t 3284 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t95\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba3a00\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t254\nnonvoluntary_ctxt_switches:\t10\n", error: ""} [2025-09-22 02:52:16] INFO -- CNTI: all_statuses_by_pids pid: 3535941 [2025-09-22 02:52:16] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:16] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:16] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:16] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:16] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:16] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:17] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3535941\nNgid:\t0\nPid:\t3535941\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3535941\nNSpid:\t3535941\nNSpgid:\t3535941\nNSsid:\t184\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11688 kB\nVmRSS:\t 11260 kB\nRssAnon:\t 4008 kB\nRssFile:\t 7252 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t35\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-09-22 02:52:17] INFO -- 
CNTI: all_statuses_by_pids pid: 3535966 [2025-09-22 02:52:17] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:17] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:17] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:17] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:17] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:17] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:17] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3535966\nNgid:\t0\nPid:\t3535966\nPPid:\t3535941\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3535966\nNSpid:\t3535966\nNSpgid:\t3535966\nNSsid:\t3535966\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 32 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t33\nnonvoluntary_ctxt_switches:\t7\n", error: ""} [2025-09-22 02:52:17] INFO -- CNTI: all_statuses_by_pids pid: 3535993 [2025-09-22 02:52:17] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:17] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:17] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:17] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:17] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:17] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:17] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3535993\nNgid:\t0\nPid:\t3535993\nPPid:\t3535941\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3535993\nNSpid:\t3535993\nNSpgid:\t3535993\nNSsid:\t3535993\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 928 kB\nVmRSS:\t 928 kB\nRssAnon:\t 88 kB\nRssFile:\t 840 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 52 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t48\nnonvoluntary_ctxt_switches:\t9\n", error: ""} [2025-09-22 02:52:17] INFO -- CNTI: all_statuses_by_pids pid: 3536069 [2025-09-22 02:52:17] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:17] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:17] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:17] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:17] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:17] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:18] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536069\nNgid:\t0\nPid:\t3536069\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536069\nNSpid:\t3536069\nNSpgid:\t3536069\nNSsid:\t184\nVmPeak:\t 1234060 kB\nVmSize:\t 1234060 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10732 kB\nVmRSS:\t 10368 kB\nRssAnon:\t 3492 kB\nRssFile:\t 6876 kB\nRssShmem:\t 0 kB\nVmData:\t 45368 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 116 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-09-22 02:52:18] INFO -- CNTI: all_statuses_by_pids pid: 3536096 [2025-09-22 02:52:18] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:18] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:18] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: 
Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:18] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:18] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:18] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536096\nNgid:\t0\nPid:\t3536096\nPPid:\t3536069\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3536096\t1\nNSpid:\t3536096\t1\nNSpgid:\t3536096\t1\nNSsid:\t3536096\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t523\nnonvoluntary_ctxt_switches:\t12\n", error: ""} [2025-09-22 02:52:18] INFO -- CNTI: all_statuses_by_pids pid: 3536721 [2025-09-22 02:52:18] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:18] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:18] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:18] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:18] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:18] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536721\nNgid:\t0\nPid:\t3536721\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536721\nNSpid:\t3536721\nNSpgid:\t3536721\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11104 kB\nVmRSS:\t 10708 kB\nRssAnon:\t 3392 kB\nRssFile:\t 7316 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t11\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread 
vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t7\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-09-22 02:52:18] INFO -- CNTI: all_statuses_by_pids pid: 3536747 [2025-09-22 02:52:18] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:18] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:18] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:18] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:18] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:19] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536747\nNgid:\t0\nPid:\t3536747\nPPid:\t3536721\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3536747\t1\nNSpid:\t3536747\t1\nNSpgid:\t3536747\t1\nNSsid:\t3536747\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t25\nnonvoluntary_ctxt_switches:\t7\n", error: ""} [2025-09-22 02:52:19] INFO -- CNTI: all_statuses_by_pids pid: 3536773 [2025-09-22 02:52:19] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:19] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:19] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:19] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:19] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:19] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:19] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tchaos-operator\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t3536773\nNgid:\t0\nPid:\t3536773\nPPid:\t3536721\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t3536773\t1\nNSpid:\t3536773\t1\nNSpgid:\t3536773\t1\nNSsid:\t3536773\t1\nVmPeak:\t 1261676 kB\nVmSize:\t 1261676 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 37628 kB\nVmRSS:\t 37628 kB\nRssAnon:\t 15952 kB\nRssFile:\t 21676 kB\nRssShmem:\t 0 kB\nVmData:\t 66500 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t327\nnonvoluntary_ctxt_switches:\t8\n", error: ""} [2025-09-22 02:52:19] INFO -- CNTI: all_statuses_by_pids pid: 3538155 [2025-09-22 02:52:19] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:19] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:19] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:19] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:19] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:19] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:19] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3538155\nNgid:\t0\nPid:\t3538155\nPPid:\t3536069\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3538155\t1\nNSpid:\t3538155\t1\nNSpgid:\t3538155\t1\nNSsid:\t3538155\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 39544 kB\nVmRSS:\t 39544 kB\nRssAnon:\t 10216 kB\nRssFile:\t 29328 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 192 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t16\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t505\nnonvoluntary_ctxt_switches:\t18\n", error: ""} [2025-09-22 02:52:19] INFO -- CNTI: all_statuses_by_pids pid: 3538235 [2025-09-22 02:52:19] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:19] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:19] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:19] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:19] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:19] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:20] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: cat: /proc/3538235/status: No such file or directory command terminated with exit code 1 [2025-09-22 02:52:20] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[1], output: "", error: "cat: /proc/3538235/status: No such file or directory\ncommand terminated with exit code 1\n"} [2025-09-22 02:52:20] INFO -- CNTI: all_statuses_by_pids pid: 392 [2025-09-22 02:52:20] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:20] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:20] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:20] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:20] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:20] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:20] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t392\nNgid:\t0\nPid:\t392\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t392\nNSpid:\t392\nNSpgid:\t392\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10668 kB\nVmRSS:\t 10132 kB\nRssAnon:\t 3208 kB\nRssFile:\t 6924 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 56 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-09-22 02:52:20] INFO -- CNTI: all_statuses_by_pids pid: 410 [2025-09-22 02:52:20] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:20] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:20] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:20] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:20] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:20] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:20] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t410\nNgid:\t0\nPid:\t410\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t410\nNSpid:\t410\nNSpgid:\t410\nNSsid:\t184\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10188 kB\nVmRSS:\t 9504 kB\nRssAnon:\t 2720 kB\nRssFile:\t 6784 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 584 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-09-22 02:52:20] INFO -- CNTI: all_statuses_by_pids pid: 447 [2025-09-22 02:52:20] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:20] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:20] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:21] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:21] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:21] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:21] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t447\nNgid:\t0\nPid:\t447\nPPid:\t392\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t447\t1\nNSpid:\t447\t1\nNSpgid:\t447\t1\nNSsid:\t447\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t161\nnonvoluntary_ctxt_switches:\t10\n", error: ""} [2025-09-22 02:52:21] INFO -- CNTI: all_statuses_by_pids pid: 456 [2025-09-22 02:52:21] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:21] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:21] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:21] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:21] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:21] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t456\nNgid:\t0\nPid:\t456\nPPid:\t410\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t456\t1\nNSpid:\t456\t1\nNSpgid:\t456\t1\nNSsid:\t456\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t23\nnonvoluntary_ctxt_switches:\t7\n", error: ""} [2025-09-22 02:52:21] INFO -- CNTI: all_statuses_by_pids pid: 506 [2025-09-22 02:52:21] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:21] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:21] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:21] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:21] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:21] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkube-proxy\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t506\nNgid:\t35458\nPid:\t506\nPPid:\t410\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t506\t1\nNSpid:\t506\t1\nNSpgid:\t506\t1\nNSsid:\t506\t1\nVmPeak:\t 1304656 kB\nVmSize:\t 1304656 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 70872 kB\nVmRSS:\t 34504 kB\nRssAnon:\t 18096 kB\nRssFile:\t 16408 kB\nRssShmem:\t 0 kB\nVmData:\t 76244 kB\nVmStk:\t 132 kB\nVmExe:\t 31876 kB\nVmLib:\t 8 kB\nVmPTE:\t 288 kB\nVmSwap:\t 468 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t36\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t257089\nnonvoluntary_ctxt_switches:\t1225\n", error: ""} [2025-09-22 02:52:21] INFO -- CNTI: all_statuses_by_pids pid: 606 [2025-09-22 02:52:21] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:21] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:22] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:22] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:22] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:22] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkindnetd\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t606\nNgid:\t35835\nPid:\t606\nPPid:\t392\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t606\t1\nNSpid:\t606\t1\nNSpgid:\t606\t1\nNSsid:\t606\t1\nVmPeak:\t 1284936 kB\nVmSize:\t 1284936 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 52100 kB\nVmRSS:\t 31756 kB\nRssAnon:\t 14008 kB\nRssFile:\t 17748 kB\nRssShmem:\t 0 kB\nVmData:\t 67472 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 256 kB\nVmSwap:\t 272 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t38\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t309672\nnonvoluntary_ctxt_switches:\t10875\n", error: ""} [2025-09-22 02:52:22] INFO -- CNTI: all_statuses_by_pids pid: 805 [2025-09-22 02:52:22] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:22] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:22] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:22] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:22] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t805\nNgid:\t0\nPid:\t805\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t805\nNSpid:\t805\nNSpgid:\t805\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11220 kB\nVmRSS:\t 10772 kB\nRssAnon:\t 3276 kB\nRssFile:\t 7496 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 40 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-09-22 02:52:22] INFO -- CNTI: all_statuses_by_pids pid: 830 [2025-09-22 02:52:22] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:22] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:22] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:22] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:22] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t830\nNgid:\t0\nPid:\t830\nPPid:\t805\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t830\t1\nNSpid:\t830\t1\nNSpgid:\t830\t1\nNSsid:\t830\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t23\nnonvoluntary_ctxt_switches:\t7\n", error: ""} [2025-09-22 02:52:22] INFO -- CNTI: all_statuses_by_pids pid: 862 [2025-09-22 02:52:22] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:23] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:23] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:23] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:23] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsh\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t862\nNgid:\t0\nPid:\t862\nPPid:\t805\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 
11 20 26 27 \nNStgid:\t862\t1\nNSpid:\t862\t1\nNSpgid:\t862\t1\nNSsid:\t862\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 1036 kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t905\nnonvoluntary_ctxt_switches:\t12\n", error: ""} [2025-09-22 02:52:23] DEBUG -- CNTI: proc process_statuses_by_node: ["Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 32596 kB\nVmSize:\t 31024 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 23796 kB\nVmRSS:\t 22280 kB\nRssAnon:\t 13496 kB\nRssFile:\t 8784 kB\nRssShmem:\t 0 kB\nVmData:\t 13016 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 92 kB\nVmSwap:\t 128 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t992320\nnonvoluntary_ctxt_switches:\t48660\n", "Name:\tsystemd-journal\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t162\nNgid:\t29234\nPid:\t162\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t162\nNSpid:\t162\nNSpgid:\t162\nNSsid:\t162\nVmPeak:\t 438860 kB\nVmSize:\t 380936 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 295792 kB\nVmRSS:\t 249896 kB\nRssAnon:\t 508 kB\nRssFile:\t 7120 kB\nRssShmem:\t 242268 kB\nVmData:\t 8964 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 660 kB\nVmSwap:\t 632 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000100000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t686204\nnonvoluntary_ctxt_switches:\t1978\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t1756\nNgid:\t0\nPid:\t1756\nPPid:\t862\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1756\t888\nNSpid:\t1756\t888\nNSpgid:\t862\t1\nNSsid:\t862\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 40 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t184\nNgid:\t0\nPid:\t184\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t1024\nGroups:\t0 \nNStgid:\t184\nNSpid:\t184\nNSpgid:\t184\nNSsid:\t184\nVmPeak:\t 9187576 kB\nVmSize:\t 8788308 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 176412 kB\nVmRSS:\t 93728 kB\nRssAnon:\t 63700 kB\nRssFile:\t 30028 kB\nRssShmem:\t 0 kB\nVmData:\t 730108 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1296 kB\nVmSwap:\t 1332 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t65\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t126\nnonvoluntary_ctxt_switches:\t1\n", "Name:\tkubelet\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t312\nNgid:\t0\nPid:\t312\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t312\nNSpid:\t312\nNSpgid:\t312\nNSsid:\t312\nVmPeak:\t 8464400 kB\nVmSize:\t 8398864 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 202948 kB\nVmRSS:\t 137384 kB\nRssAnon:\t 90552 kB\nRssFile:\t 46832 kB\nRssShmem:\t 0 kB\nVmData:\t 1013280 kB\nVmStk:\t 132 kB\nVmExe:\t 36928 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1420 kB\nVmSwap:\t 3284 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t95\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba3a00\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t254\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3535941\nNgid:\t0\nPid:\t3535941\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3535941\nNSpid:\t3535941\nNSpgid:\t3535941\nNSsid:\t184\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11688 kB\nVmRSS:\t 11260 kB\nRssAnon:\t 4008 kB\nRssFile:\t 7252 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t35\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t3535966\nNgid:\t0\nPid:\t3535966\nPPid:\t3535941\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3535966\nNSpid:\t3535966\nNSpgid:\t3535966\nNSsid:\t3535966\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 32 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t33\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3535993\nNgid:\t0\nPid:\t3535993\nPPid:\t3535941\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3535993\nNSpid:\t3535993\nNSpgid:\t3535993\nNSsid:\t3535993\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 928 kB\nVmRSS:\t 928 kB\nRssAnon:\t 88 kB\nRssFile:\t 840 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 52 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t48\nnonvoluntary_ctxt_switches:\t9\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536069\nNgid:\t0\nPid:\t3536069\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536069\nNSpid:\t3536069\nNSpgid:\t3536069\nNSsid:\t184\nVmPeak:\t 1234060 kB\nVmSize:\t 1234060 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10732 kB\nVmRSS:\t 10368 kB\nRssAnon:\t 3492 kB\nRssFile:\t 6876 kB\nRssShmem:\t 0 kB\nVmData:\t 45368 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 116 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536096\nNgid:\t0\nPid:\t3536096\nPPid:\t3536069\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3536096\t1\nNSpid:\t3536096\t1\nNSpgid:\t3536096\t1\nNSsid:\t3536096\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t523\nnonvoluntary_ctxt_switches:\t12\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536721\nNgid:\t0\nPid:\t3536721\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536721\nNSpid:\t3536721\nNSpgid:\t3536721\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11104 kB\nVmRSS:\t 10708 kB\nRssAnon:\t 3392 kB\nRssFile:\t 7316 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t11\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t7\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536747\nNgid:\t0\nPid:\t3536747\nPPid:\t3536721\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3536747\t1\nNSpid:\t3536747\t1\nNSpgid:\t3536747\t1\nNSsid:\t3536747\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t25\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tchaos-operator\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536773\nNgid:\t0\nPid:\t3536773\nPPid:\t3536721\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t3536773\t1\nNSpid:\t3536773\t1\nNSpgid:\t3536773\t1\nNSsid:\t3536773\t1\nVmPeak:\t 1261676 kB\nVmSize:\t 1261676 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 37628 kB\nVmRSS:\t 37628 kB\nRssAnon:\t 15952 kB\nRssFile:\t 21676 kB\nRssShmem:\t 0 kB\nVmData:\t 66500 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t327\nnonvoluntary_ctxt_switches:\t8\n", 
"Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3538155\nNgid:\t0\nPid:\t3538155\nPPid:\t3536069\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3538155\t1\nNSpid:\t3538155\t1\nNSpgid:\t3538155\t1\nNSsid:\t3538155\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 39544 kB\nVmRSS:\t 39544 kB\nRssAnon:\t 10216 kB\nRssFile:\t 29328 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 192 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t16\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t505\nnonvoluntary_ctxt_switches:\t18\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t392\nNgid:\t0\nPid:\t392\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t392\nNSpid:\t392\nNSpgid:\t392\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10668 kB\nVmRSS:\t 10132 kB\nRssAnon:\t 3208 kB\nRssFile:\t 6924 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 56 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t410\nNgid:\t0\nPid:\t410\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t410\nNSpid:\t410\nNSpgid:\t410\nNSsid:\t184\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10188 kB\nVmRSS:\t 9504 kB\nRssAnon:\t 2720 kB\nRssFile:\t 6784 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 584 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t447\nNgid:\t0\nPid:\t447\nPPid:\t392\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t447\t1\nNSpid:\t447\t1\nNSpgid:\t447\t1\nNSsid:\t447\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t161\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t456\nNgid:\t0\nPid:\t456\nPPid:\t410\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t456\t1\nNSpid:\t456\t1\nNSpgid:\t456\t1\nNSsid:\t456\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t23\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tkube-proxy\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t506\nNgid:\t35458\nPid:\t506\nPPid:\t410\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t506\t1\nNSpid:\t506\t1\nNSpgid:\t506\t1\nNSsid:\t506\t1\nVmPeak:\t 1304656 kB\nVmSize:\t 1304656 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 70872 kB\nVmRSS:\t 34504 kB\nRssAnon:\t 18096 kB\nRssFile:\t 16408 kB\nRssShmem:\t 0 kB\nVmData:\t 76244 kB\nVmStk:\t 132 kB\nVmExe:\t 31876 kB\nVmLib:\t 8 kB\nVmPTE:\t 288 kB\nVmSwap:\t 468 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t36\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t257089\nnonvoluntary_ctxt_switches:\t1225\n", "Name:\tkindnetd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t606\nNgid:\t35835\nPid:\t606\nPPid:\t392\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t606\t1\nNSpid:\t606\t1\nNSpgid:\t606\t1\nNSsid:\t606\t1\nVmPeak:\t 1284936 kB\nVmSize:\t 1284936 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 52100 kB\nVmRSS:\t 31756 kB\nRssAnon:\t 14008 kB\nRssFile:\t 17748 kB\nRssShmem:\t 0 kB\nVmData:\t 67472 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 256 kB\nVmSwap:\t 272 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t38\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t309672\nnonvoluntary_ctxt_switches:\t10875\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t805\nNgid:\t0\nPid:\t805\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t805\nNSpid:\t805\nNSpgid:\t805\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11220 kB\nVmRSS:\t 10772 kB\nRssAnon:\t 3276 kB\nRssFile:\t 7496 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 40 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t830\nNgid:\t0\nPid:\t830\nPPid:\t805\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t830\t1\nNSpid:\t830\t1\nNSpgid:\t830\t1\nNSsid:\t830\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t23\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsh\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t862\nNgid:\t0\nPid:\t862\nPPid:\t805\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t862\t1\nNSpid:\t862\t1\nNSpgid:\t862\t1\nNSsid:\t862\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 1036 kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t905\nnonvoluntary_ctxt_switches:\t12\n"] [2025-09-22 02:52:23] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 3538155 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 32596 kB\nVmSize:\t 31024 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 23796 kB\nVmRSS:\t 22280 kB\nRssAnon:\t 13496 kB\nRssFile:\t 8784 kB\nRssShmem:\t 0 kB\nVmData:\t 13016 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 92 kB\nVmSwap:\t 128 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t992320\nnonvoluntary_ctxt_switches:\t48660\n", "Name:\tsystemd-journal\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t162\nNgid:\t29234\nPid:\t162\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t162\nNSpid:\t162\nNSpgid:\t162\nNSsid:\t162\nVmPeak:\t 438860 kB\nVmSize:\t 380936 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 295792 kB\nVmRSS:\t 249896 kB\nRssAnon:\t 508 kB\nRssFile:\t 7120 kB\nRssShmem:\t 242268 kB\nVmData:\t 8964 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 660 kB\nVmSwap:\t 632 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000100000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force 
mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t686204\nnonvoluntary_ctxt_switches:\t1978\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t1756\nNgid:\t0\nPid:\t1756\nPPid:\t862\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1756\t888\nNSpid:\t1756\t888\nNSpgid:\t862\t1\nNSsid:\t862\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 40 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t184\nNgid:\t0\nPid:\t184\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t1024\nGroups:\t0 \nNStgid:\t184\nNSpid:\t184\nNSpgid:\t184\nNSsid:\t184\nVmPeak:\t 9187576 kB\nVmSize:\t 8788308 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 176412 kB\nVmRSS:\t 93728 kB\nRssAnon:\t 63700 kB\nRssFile:\t 30028 kB\nRssShmem:\t 0 kB\nVmData:\t 730108 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1296 kB\nVmSwap:\t 1332 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t65\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t126\nnonvoluntary_ctxt_switches:\t1\n", "Name:\tkubelet\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t312\nNgid:\t0\nPid:\t312\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t312\nNSpid:\t312\nNSpgid:\t312\nNSsid:\t312\nVmPeak:\t 8464400 kB\nVmSize:\t 8398864 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 202948 kB\nVmRSS:\t 137384 kB\nRssAnon:\t 90552 kB\nRssFile:\t 46832 kB\nRssShmem:\t 0 kB\nVmData:\t 1013280 kB\nVmStk:\t 132 kB\nVmExe:\t 36928 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1420 kB\nVmSwap:\t 3284 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t95\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba3a00\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t254\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3535941\nNgid:\t0\nPid:\t3535941\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3535941\nNSpid:\t3535941\nNSpgid:\t3535941\nNSsid:\t184\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11688 kB\nVmRSS:\t 11260 kB\nRssAnon:\t 4008 kB\nRssFile:\t 7252 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t35\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3535966\nNgid:\t0\nPid:\t3535966\nPPid:\t3535941\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3535966\nNSpid:\t3535966\nNSpgid:\t3535966\nNSsid:\t3535966\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 32 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t33\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3535993\nNgid:\t0\nPid:\t3535993\nPPid:\t3535941\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3535993\nNSpid:\t3535993\nNSpgid:\t3535993\nNSsid:\t3535993\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 928 kB\nVmRSS:\t 928 kB\nRssAnon:\t 88 kB\nRssFile:\t 840 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 52 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t48\nnonvoluntary_ctxt_switches:\t9\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536069\nNgid:\t0\nPid:\t3536069\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536069\nNSpid:\t3536069\nNSpgid:\t3536069\nNSsid:\t184\nVmPeak:\t 1234060 kB\nVmSize:\t 1234060 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10732 kB\nVmRSS:\t 10368 kB\nRssAnon:\t 3492 kB\nRssFile:\t 6876 kB\nRssShmem:\t 0 kB\nVmData:\t 45368 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 116 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536096\nNgid:\t0\nPid:\t3536096\nPPid:\t3536069\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3536096\t1\nNSpid:\t3536096\t1\nNSpgid:\t3536096\t1\nNSsid:\t3536096\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t523\nnonvoluntary_ctxt_switches:\t12\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536721\nNgid:\t0\nPid:\t3536721\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3536721\nNSpid:\t3536721\nNSpgid:\t3536721\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11104 kB\nVmRSS:\t 10708 kB\nRssAnon:\t 3392 kB\nRssFile:\t 7316 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t11\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t7\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t3536747\nNgid:\t0\nPid:\t3536747\nPPid:\t3536721\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t3536747\t1\nNSpid:\t3536747\t1\nNSpgid:\t3536747\t1\nNSsid:\t3536747\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t25\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tchaos-operator\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3536773\nNgid:\t0\nPid:\t3536773\nPPid:\t3536721\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t3536773\t1\nNSpid:\t3536773\t1\nNSpgid:\t3536773\t1\nNSsid:\t3536773\t1\nVmPeak:\t 1261676 kB\nVmSize:\t 1261676 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 37628 kB\nVmRSS:\t 37628 kB\nRssAnon:\t 15952 kB\nRssFile:\t 21676 kB\nRssShmem:\t 0 kB\nVmData:\t 66500 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t327\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t3538155\nNgid:\t0\nPid:\t3538155\nPPid:\t3536069\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t3538155\t1\nNSpid:\t3538155\t1\nNSpgid:\t3538155\t1\nNSsid:\t3538155\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 39544 kB\nVmRSS:\t 39544 kB\nRssAnon:\t 10216 kB\nRssFile:\t 29328 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 192 
kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t16\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t505\nnonvoluntary_ctxt_switches:\t18\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t392\nNgid:\t0\nPid:\t392\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t392\nNSpid:\t392\nNSpgid:\t392\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10668 kB\nVmRSS:\t 10132 kB\nRssAnon:\t 3208 kB\nRssFile:\t 6924 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 56 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t410\nNgid:\t0\nPid:\t410\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t410\nNSpid:\t410\nNSpgid:\t410\nNSsid:\t184\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10188 kB\nVmRSS:\t 9504 kB\nRssAnon:\t 2720 kB\nRssFile:\t 6784 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 584 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t447\nNgid:\t0\nPid:\t447\nPPid:\t392\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t447\t1\nNSpid:\t447\t1\nNSpgid:\t447\t1\nNSsid:\t447\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t161\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t456\nNgid:\t0\nPid:\t456\nPPid:\t410\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t456\t1\nNSpid:\t456\t1\nNSpgid:\t456\t1\nNSsid:\t456\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t23\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tkube-proxy\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t506\nNgid:\t35458\nPid:\t506\nPPid:\t410\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t506\t1\nNSpid:\t506\t1\nNSpgid:\t506\t1\nNSsid:\t506\t1\nVmPeak:\t 1304656 kB\nVmSize:\t 1304656 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 70872 kB\nVmRSS:\t 34504 kB\nRssAnon:\t 18096 kB\nRssFile:\t 16408 kB\nRssShmem:\t 0 kB\nVmData:\t 76244 kB\nVmStk:\t 132 kB\nVmExe:\t 31876 kB\nVmLib:\t 8 kB\nVmPTE:\t 288 kB\nVmSwap:\t 468 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t36\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t257089\nnonvoluntary_ctxt_switches:\t1225\n", "Name:\tkindnetd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t606\nNgid:\t35835\nPid:\t606\nPPid:\t392\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t606\t1\nNSpid:\t606\t1\nNSpgid:\t606\t1\nNSsid:\t606\t1\nVmPeak:\t 1284936 kB\nVmSize:\t 1284936 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 52100 kB\nVmRSS:\t 31756 kB\nRssAnon:\t 14008 kB\nRssFile:\t 17748 kB\nRssShmem:\t 0 kB\nVmData:\t 67472 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 256 kB\nVmSwap:\t 272 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t38\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t309672\nnonvoluntary_ctxt_switches:\t10875\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t805\nNgid:\t0\nPid:\t805\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t805\nNSpid:\t805\nNSpgid:\t805\nNSsid:\t184\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11220 kB\nVmRSS:\t 10772 kB\nRssAnon:\t 3276 kB\nRssFile:\t 7496 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 40 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t3/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t830\nNgid:\t0\nPid:\t830\nPPid:\t805\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t830\t1\nNSpid:\t830\t1\nNSpgid:\t830\t1\nNSsid:\t830\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t23\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsh\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t862\nNgid:\t0\nPid:\t862\nPPid:\t805\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t862\t1\nNSpid:\t862\t1\nNSpgid:\t862\t1\nNSsid:\t862\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 1036 kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256612\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t905\nnonvoluntary_ctxt_switches:\t12\n"] [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: systemd Umask: 0000 State: S (sleeping) Tgid: 1 Ngid: 0 Pid: 1 PPid: 0 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 256 Groups: 0 NStgid: 1 NSpid: 1 NSpgid: 1 NSsid: 1 VmPeak: 32596 kB VmSize: 31024 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 23796 kB VmRSS: 22280 kB RssAnon: 13496 kB RssFile: 8784 kB RssShmem: 0 kB VmData: 13016 kB VmStk: 132 kB VmExe: 40 kB VmLib: 10688 kB VmPTE: 92 kB VmSwap: 128 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 7fe3c0fe28014a03 SigIgn: 0000000000001000 SigCgt: 00000000000004ec CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 992320 nonvoluntary_ctxt_switches: 48660 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "systemd", "Umask" => "0000", "State" => "S (sleeping)", "Tgid" => "1", "Ngid" => "0", "Pid" => "1", "PPid" => "0", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "256", "Groups" => "0", "NStgid" => "1", "NSpid" => "1", "NSpgid" => "1", "NSsid" => "1", "VmPeak" => "32596 kB", "VmSize" => "31024 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "23796 kB", "VmRSS" => "22280 kB", "RssAnon" => "13496 kB", "RssFile" => "8784 kB", "RssShmem" => "0 kB", "VmData" => "13016 kB", "VmStk" => "132 kB", "VmExe" => "40 kB", "VmLib" => "10688 kB", "VmPTE" => "92 kB", "VmSwap" => "128 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "7fe3c0fe28014a03", "SigIgn" => "0000000000001000", "SigCgt" => "00000000000004ec", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", 
"Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "992320", "nonvoluntary_ctxt_switches" => "48660"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: systemd-journal Umask: 0022 State: S (sleeping) Tgid: 162 Ngid: 29234 Pid: 162 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 162 NSpid: 162 NSpgid: 162 NSsid: 162 VmPeak: 438860 kB VmSize: 380936 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 295792 kB VmRSS: 249896 kB RssAnon: 508 kB RssFile: 7120 kB RssShmem: 242268 kB VmData: 8964 kB VmStk: 132 kB VmExe: 92 kB VmLib: 9736 kB VmPTE: 660 kB VmSwap: 632 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000400004a02 SigIgn: 0000000000001000 SigCgt: 0000000100000040 CapInh: 0000000000000000 CapPrm: 00000025402800cf CapEff: 00000025402800cf CapBnd: 00000025402800cf CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 20 Speculation_Store_Bypass: thread force mitigated SpeculationIndirectBranch: conditional force disabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 686204 nonvoluntary_ctxt_switches: 1978 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "systemd-journal", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "162", "Ngid" => "29234", "Pid" => "162", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "162", "NSpid" => "162", "NSpgid" => "162", "NSsid" => "162", "VmPeak" => "438860 kB", "VmSize" => "380936 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "295792 kB", "VmRSS" => "249896 kB", "RssAnon" => "508 kB", "RssFile" => "7120 kB", "RssShmem" => "242268 kB", "VmData" => "8964 kB", "VmStk" => "132 kB", "VmExe" => "92 kB", "VmLib" => "9736 kB", "VmPTE" => "660 kB", "VmSwap" => "632 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000400004a02", "SigIgn" => "0000000000001000", "SigCgt" => "0000000100000040", "CapInh" => "0000000000000000", "CapPrm" => "00000025402800cf", "CapEff" => "00000025402800cf", "CapBnd" => "00000025402800cf", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "20", "Speculation_Store_Bypass" => "thread force mitigated", "SpeculationIndirectBranch" => "conditional force disabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "686204", "nonvoluntary_ctxt_switches" => "1978"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: sleep 
Umask: 0022 State: S (sleeping) Tgid: 1756 Ngid: 0 Pid: 1756 PPid: 862 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 1 2 3 4 6 10 11 20 26 27 NStgid: 1756 888 NSpid: 1756 888 NSpgid: 862 1 NSsid: 862 1 VmPeak: 3552 kB VmSize: 1532 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 20 kB VmStk: 132 kB VmExe: 788 kB VmLib: 556 kB VmPTE: 40 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 1 nonvoluntary_ctxt_switches: 0 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "1756", "Ngid" => "0", "Pid" => "1756", "PPid" => "862", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0 1 2 3 4 6 10 11 20 26 27", "NStgid" => "1756\t888", "NSpid" => "1756\t888", "NSpgid" => "862\t1", "NSsid" => "862\t1", "VmPeak" => "3552 kB", "VmSize" => "1532 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "20 kB", "VmStk" => "132 kB", "VmExe" => "788 kB", "VmLib" => "556 kB", "VmPTE" => "40 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1", "nonvoluntary_ctxt_switches" => "0"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: containerd Umask: 0022 State: S (sleeping) Tgid: 184 Ngid: 0 Pid: 184 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 1024 Groups: 0 NStgid: 184 NSpid: 184 NSpgid: 184 NSsid: 184 VmPeak: 9187576 kB VmSize: 8788308 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 176412 kB VmRSS: 93728 kB 
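
The parse_status / parsed_status pairs above show the testsuite flattening each /proc/<pid>/status dump into a key/value hash. A minimal sketch of that mapping (illustrative Python; the suite itself is written in Crystal, so the names and types here are assumptions, not the real implementation):

    def parse_status(status_output: str) -> dict:
        # Each /proc/<pid>/status line is "Key:<tab>value"; strip only the outer
        # whitespace, so multi-column fields like Uid keep their embedded tabs
        # ("0\t0\t0\t0"), matching the parsed_status hashes logged above.
        parsed = {}
        for line in status_output.splitlines():
            key, sep, value = line.partition(":")
            if sep:
                parsed[key.strip()] = value.strip()
        return parsed

For example, parse_status("Name:\tsleep\nPid:\t1756\nPPid:\t862\n") returns {"Name": "sleep", "Pid": "1756", "PPid": "862"}, the same shape as the parsed_status hashes in these records.

The interleaved CNTI-proctree_by_pid records walk those hashes by parent PID. A sketch of that traversal under the same assumptions (hypothetical helper, not the suite's code):

    from collections import defaultdict

    def proctree(statuses: list, root_pid: str) -> list:
        # Group parsed statuses by PPid, then collect every descendant of root_pid.
        children = defaultdict(list)
        for status in statuses:
            children[status["PPid"]].append(status)
        tree, stack = [], [root_pid]
        while stack:
            for child in children[stack.pop()]:
                tree.append(child)
                stack.append(child["Pid"])
        return tree

Against the records in this run, proctree(statuses, "3536069") would collect the pause (Pid 3536096) and coredns (Pid 3538155) entries, whose PPid fields point back at that containerd-shim.
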
RssAnon: 63700 kB RssFile: 30028 kB RssShmem: 0 kB VmData: 730108 kB VmStk: 132 kB VmExe: 18236 kB VmLib: 1524 kB VmPTE: 1296 kB VmSwap: 1332 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 65 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 126 nonvoluntary_ctxt_switches: 1 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "184", "Ngid" => "0", "Pid" => "184", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "1024", "Groups" => "0", "NStgid" => "184", "NSpid" => "184", "NSpgid" => "184", "NSsid" => "184", "VmPeak" => "9187576 kB", "VmSize" => "8788308 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "176412 kB", "VmRSS" => "93728 kB", "RssAnon" => "63700 kB", "RssFile" => "30028 kB", "RssShmem" => "0 kB", "VmData" => "730108 kB", "VmStk" => "132 kB", "VmExe" => "18236 kB", "VmLib" => "1524 kB", "VmPTE" => "1296 kB", "VmSwap" => "1332 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "65", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "126", "nonvoluntary_ctxt_switches" => "1"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: kubelet Umask: 0022 State: S (sleeping) Tgid: 312 Ngid: 0 Pid: 312 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 256 Groups: 0 NStgid: 312 NSpid: 312 NSpgid: 312 NSsid: 312 VmPeak: 8464400 kB VmSize: 8398864 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 202948 kB VmRSS: 137384 kB RssAnon: 90552 kB RssFile: 46832 kB RssShmem: 0 kB VmData: 1013280 kB VmStk: 132 kB VmExe: 36928 kB VmLib: 1560 kB VmPTE: 1420 kB VmSwap: 3284 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 95 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 
fffffffc3bba3a00 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 254 nonvoluntary_ctxt_switches: 10 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kubelet", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "312", "Ngid" => "0", "Pid" => "312", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "256", "Groups" => "0", "NStgid" => "312", "NSpid" => "312", "NSpgid" => "312", "NSsid" => "312", "VmPeak" => "8464400 kB", "VmSize" => "8398864 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "202948 kB", "VmRSS" => "137384 kB", "RssAnon" => "90552 kB", "RssFile" => "46832 kB", "RssShmem" => "0 kB", "VmData" => "1013280 kB", "VmStk" => "132 kB", "VmExe" => "36928 kB", "VmLib" => "1560 kB", "VmPTE" => "1420 kB", "VmSwap" => "3284 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "95", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba3a00", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "254", "nonvoluntary_ctxt_switches" => "10"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 3535941 Ngid: 0 Pid: 3535941 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 3535941 NSpid: 3535941 NSpgid: 3535941 NSsid: 184 VmPeak: 1233804 kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 11688 kB VmRSS: 11260 kB RssAnon: 4008 kB RssFile: 7252 kB RssShmem: 0 kB VmData: 45112 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 112 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread 
vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 35 nonvoluntary_ctxt_switches: 0 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3535941", "Ngid" => "0", "Pid" => "3535941", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3535941", "NSpid" => "3535941", "NSpgid" => "3535941", "NSsid" => "184", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "11688 kB", "VmRSS" => "11260 kB", "RssAnon" => "4008 kB", "RssFile" => "7252 kB", "RssShmem" => "0 kB", "VmData" => "45112 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "112 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "35", "nonvoluntary_ctxt_switches" => "0"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 3535966 Ngid: 0 Pid: 3535966 PPid: 3535941 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 3535966 NSpid: 3535966 NSpgid: 3535966 NSsid: 3535966 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 32 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 33 nonvoluntary_ctxt_switches: 7 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3535966", "Ngid" => "0", "Pid" => "3535966", "PPid" => "3535941", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "3535966", "NSpid" => "3535966", "NSpgid" => "3535966", "NSsid" => "3535966", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "32 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "33", "nonvoluntary_ctxt_switches" => "7"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: sleep Umask: 0022 State: S (sleeping) Tgid: 3535993 Ngid: 0 Pid: 3535993 PPid: 3535941 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 3535993 NSpid: 3535993 NSpgid: 3535993 NSsid: 3535993 VmPeak: 2488 kB VmSize: 2488 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 928 kB VmRSS: 928 kB RssAnon: 88 kB RssFile: 840 kB RssShmem: 0 kB VmData: 224 kB VmStk: 132 kB VmExe: 20 kB VmLib: 1524 kB VmPTE: 52 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 48 nonvoluntary_ctxt_switches: 9 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3535993", "Ngid" => "0", "Pid" => "3535993", "PPid" => "3535941", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3535993", "NSpid" => "3535993", "NSpgid" => "3535993", "NSsid" => "3535993", "VmPeak" => "2488 kB", "VmSize" => "2488 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "928 kB", "VmRSS" => "928 kB", "RssAnon" => "88 kB", "RssFile" => "840 kB", "RssShmem" => "0 kB", "VmData" => "224 kB", "VmStk" => "132 kB", "VmExe" => "20 kB", "VmLib" => "1524 kB", "VmPTE" => "52 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "48", "nonvoluntary_ctxt_switches" => "9"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 3536069 Ngid: 0 Pid: 3536069 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 3536069 NSpid: 3536069 NSpgid: 3536069 NSsid: 184 VmPeak: 1234060 kB VmSize: 1234060 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10732 kB VmRSS: 10368 kB RssAnon: 3492 kB RssFile: 6876 kB RssShmem: 0 kB VmData: 45368 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 116 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 10 nonvoluntary_ctxt_switches: 0 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536069", "Ngid" => "0", "Pid" => "3536069", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3536069", "NSpid" => "3536069", "NSpgid" => "3536069", "NSsid" => "184", "VmPeak" => "1234060 kB", "VmSize" => "1234060 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10732 kB", "VmRSS" => "10368 kB", "RssAnon" => "3492 kB", "RssFile" => "6876 kB", "RssShmem" => "0 kB", "VmData" => "45368 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "116 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "10", "nonvoluntary_ctxt_switches" => "0"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 3536096 Ngid: 0 Pid: 3536096 PPid: 3536069 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 3536096 1 NSpid: 3536096 1 NSpgid: 3536096 1 NSsid: 3536096 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 1 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 523 nonvoluntary_ctxt_switches: 12 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536096", "Ngid" => "0", "Pid" => "3536096", "PPid" => "3536069", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "3536096\t1", "NSpid" => "3536096\t1", "NSpgid" => "3536096\t1", "NSsid" => "3536096\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "523", "nonvoluntary_ctxt_switches" => "12"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 3536721 Ngid: 0 Pid: 3536721 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 3536721 NSpid: 3536721 NSpgid: 3536721 NSsid: 184 VmPeak: 1233548 kB VmSize: 1233548 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 11104 kB VmRSS: 10708 kB RssAnon: 3392 kB RssFile: 7316 kB RssShmem: 0 kB VmData: 40760 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 104 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 11 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 7 nonvoluntary_ctxt_switches: 0 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536721", "Ngid" => "0", "Pid" => "3536721", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3536721", "NSpid" => "3536721", "NSpgid" => "3536721", "NSsid" => "184", "VmPeak" => "1233548 kB", "VmSize" => "1233548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "11104 kB", "VmRSS" => "10708 kB", "RssAnon" => "3392 kB", "RssFile" => "7316 kB", "RssShmem" => "0 kB", "VmData" => "40760 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "104 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "11", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "7", "nonvoluntary_ctxt_switches" => "0"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 3536747 Ngid: 0 Pid: 3536747 PPid: 3536721 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 3536747 1 NSpid: 3536747 1 NSpgid: 3536747 1 NSsid: 3536747 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 1 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 25 nonvoluntary_ctxt_switches: 7 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536747", "Ngid" => "0", "Pid" => "3536747", "PPid" => "3536721", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "3536747\t1", "NSpid" => "3536747\t1", "NSpgid" => "3536747\t1", "NSsid" => "3536747\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "25", "nonvoluntary_ctxt_switches" => "7"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: chaos-operator Umask: 0022 State: S (sleeping) Tgid: 3536773 Ngid: 0 Pid: 3536773 PPid: 3536721 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 64 Groups: 1000 NStgid: 3536773 1 NSpid: 3536773 1 NSpgid: 3536773 1 NSsid: 3536773 1 VmPeak: 1261676 kB VmSize: 1261676 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 37628 kB VmRSS: 37628 kB RssAnon: 15952 kB RssFile: 21676 kB RssShmem: 0 kB VmData: 66500 kB VmStk: 132 kB VmExe: 15232 kB VmLib: 8 kB VmPTE: 200 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 34 SigQ: 0/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 327 nonvoluntary_ctxt_switches: 8 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "chaos-operator", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3536773", "Ngid" => "0", "Pid" => "3536773", "PPid" => "3536721", "TracerPid" => "0", "Uid" => "1000\t1000\t1000\t1000", "Gid" => "1000\t1000\t1000\t1000", "FDSize" => "64", "Groups" => "1000", "NStgid" => "3536773\t1", "NSpid" => "3536773\t1", "NSpgid" => "3536773\t1", "NSsid" => "3536773\t1", "VmPeak" => "1261676 kB", "VmSize" => "1261676 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "37628 kB", "VmRSS" => "37628 kB", "RssAnon" => "15952 kB", "RssFile" => "21676 kB", "RssShmem" => "0 kB", "VmData" => "66500 kB", "VmStk" => "132 kB", "VmExe" => "15232 kB", "VmLib" => "8 kB", "VmPTE" => "200 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "34", "SigQ" => "0/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "327", "nonvoluntary_ctxt_switches" => "8"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 3538155 Ngid: 0 Pid: 3538155 PPid: 3536069 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 3538155 1 NSpid: 3538155 1 NSpgid: 3538155 1 NSsid: 3538155 1 VmPeak: 747724 kB VmSize: 747724 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 39544 kB VmRSS: 39544 kB RssAnon: 10216 kB RssFile: 29328 kB RssShmem: 0 kB VmData: 107912 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 192 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 16 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 505 nonvoluntary_ctxt_switches: 18 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3538155", "Ngid" => "0", "Pid" => "3538155", "PPid" => "3536069", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3538155\t1", "NSpid" => "3538155\t1", "NSpgid" => "3538155\t1", "NSsid" => "3538155\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "39544 kB", "VmRSS" => "39544 kB", "RssAnon" => "10216 kB", "RssFile" => "29328 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "192 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "16", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "505", "nonvoluntary_ctxt_switches" => "18"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] INFO -- CNTI: cmdline_by_pid [2025-09-22 02:52:23] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:23] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:23] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:23] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:23] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:23] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-09-22 02:52:23] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 392 Ngid: 0 Pid: 392 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 392 NSpid: 392 NSpgid: 392 NSsid: 184 VmPeak: 1233548 kB 
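Each parse_status / parsed_status pair in this stretch is the testsuite snapshotting /proc/<pid>/status on the node and folding the raw text into a key-to-value map; multi-value fields such as Uid keep their internal tabs, which is why the parsed dicts show \t separators. A minimal Python sketch of that transformation (the testsuite itself is written in Crystal, so this illustrates the parsing step rather than reproducing its actual code):

    def parse_status(status_output: str) -> dict[str, str]:
        """Fold /proc/<pid>/status text (one 'Key:<TAB>value' per line) into a dict."""
        parsed: dict[str, str] = {}
        for line in status_output.splitlines():
            key, sep, value = line.partition(":")
            if sep:  # skip anything without a 'Key:' prefix
                # strip() removes the leading tab but keeps internal tabs,
                # matching entries like "Uid" => "0\t0\t0\t0" in the log above
                parsed[key.strip()] = value.strip()
        return parsed
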
VmSize: 1233548 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10668 kB VmRSS: 10132 kB RssAnon: 3208 kB RssFile: 6924 kB RssShmem: 0 kB VmData: 40760 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 108 kB VmSwap: 56 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 13 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 8 nonvoluntary_ctxt_switches: 0 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "392", "Ngid" => "0", "Pid" => "392", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "392", "NSpid" => "392", "NSpgid" => "392", "NSsid" => "184", "VmPeak" => "1233548 kB", "VmSize" => "1233548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10668 kB", "VmRSS" => "10132 kB", "RssAnon" => "3208 kB", "RssFile" => "6924 kB", "RssShmem" => "0 kB", "VmData" => "40760 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "108 kB", "VmSwap" => "56 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "13", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "8", "nonvoluntary_ctxt_switches" => "0"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 410 Ngid: 0 Pid: 410 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 410 NSpid: 410 NSpgid: 410 NSsid: 184 VmPeak: 1233804 kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10188 kB VmRSS: 9504 kB RssAnon: 2720 kB RssFile: 6784 kB RssShmem: 0 kB VmData: 41016 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 108 kB VmSwap: 584 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 13 SigQ: 3/256612 SigPnd: 
0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 8 nonvoluntary_ctxt_switches: 0 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "410", "Ngid" => "0", "Pid" => "410", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "410", "NSpid" => "410", "NSpgid" => "410", "NSsid" => "184", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10188 kB", "VmRSS" => "9504 kB", "RssAnon" => "2720 kB", "RssFile" => "6784 kB", "RssShmem" => "0 kB", "VmData" => "41016 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "108 kB", "VmSwap" => "584 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "13", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "8", "nonvoluntary_ctxt_switches" => "0"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 447 Ngid: 0 Pid: 447 PPid: 392 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 447 1 NSpid: 447 1 NSpgid: 447 1 NSsid: 447 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 1 
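The cmdline_by_node output earlier in this stretch ("/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000") is the raw content of /proc/<pid>/cmdline, in which arguments are NUL-terminated rather than space-separated. Recovering argv is a single split on NUL; a small Python illustration (argv_from_cmdline is a hypothetical helper name, not part of the suite):

    def argv_from_cmdline(raw: str) -> list[str]:
        """Split raw /proc/<pid>/cmdline content into an argv list."""
        # each argument ends with a NUL byte, so drop the trailing empty field
        return [arg for arg in raw.split("\0") if arg]

    # argv_from_cmdline("/coredns\0-conf\0/etc/coredns/Corefile\0")
    # -> ["/coredns", "-conf", "/etc/coredns/Corefile"]
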
Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 161 nonvoluntary_ctxt_switches: 10 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "447", "Ngid" => "0", "Pid" => "447", "PPid" => "392", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "447\t1", "NSpid" => "447\t1", "NSpgid" => "447\t1", "NSsid" => "447\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "161", "nonvoluntary_ctxt_switches" => "10"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 456 Ngid: 0 Pid: 456 PPid: 410 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 456 1 NSpid: 456 1 NSpgid: 456 1 NSsid: 456 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 23 nonvoluntary_ctxt_switches: 7 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "456", "Ngid" => "0", "Pid" => "456", "PPid" => "410", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "456\t1", "NSpid" => "456\t1", "NSpgid" => "456\t1", "NSsid" => "456\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "23", "nonvoluntary_ctxt_switches" => "7"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: kube-proxy Umask: 0022 State: S (sleeping) Tgid: 506 Ngid: 35458 Pid: 506 PPid: 410 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 506 1 NSpid: 506 1 NSpgid: 506 1 NSsid: 506 1 VmPeak: 1304656 kB VmSize: 1304656 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 70872 kB VmRSS: 34504 kB RssAnon: 18096 kB RssFile: 16408 kB RssShmem: 0 kB VmData: 76244 kB VmStk: 132 kB VmExe: 31876 kB VmLib: 8 kB VmPTE: 288 kB VmSwap: 468 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 36 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 257089 nonvoluntary_ctxt_switches: 1225 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kube-proxy", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "506", "Ngid" => "35458", "Pid" => "506", "PPid" => "410", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "506\t1", "NSpid" => "506\t1", "NSpgid" => "506\t1", "NSsid" => "506\t1", "VmPeak" => "1304656 kB", "VmSize" => "1304656 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "70872 kB", "VmRSS" => "34504 kB", "RssAnon" => "18096 kB", "RssFile" => "16408 kB", "RssShmem" => "0 kB", "VmData" => "76244 kB", "VmStk" => "132 kB", "VmExe" => "31876 kB", "VmLib" => "8 kB", "VmPTE" => "288 kB", "VmSwap" => "468 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "36", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "257089", "nonvoluntary_ctxt_switches" => "1225"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: kindnetd Umask: 0022 State: S (sleeping) Tgid: 606 Ngid: 35835 Pid: 606 PPid: 392 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 606 1 NSpid: 606 1 NSpgid: 606 1 NSsid: 606 1 VmPeak: 1284936 kB VmSize: 1284936 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 52100 kB VmRSS: 31756 kB RssAnon: 14008 kB RssFile: 17748 kB RssShmem: 0 kB VmData: 67472 kB VmStk: 132 kB VmExe: 25108 kB VmLib: 8 kB VmPTE: 256 kB VmSwap: 272 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 38 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80435fb CapEff: 00000000a80435fb CapBnd: 00000000a80435fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 309672 nonvoluntary_ctxt_switches: 10875 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kindnetd", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "606", "Ngid" => "35835", "Pid" => "606", "PPid" => "392", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "606\t1", "NSpid" => "606\t1", "NSpgid" => "606\t1", "NSsid" => "606\t1", "VmPeak" => "1284936 kB", "VmSize" => "1284936 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "52100 kB", "VmRSS" => "31756 kB", "RssAnon" => "14008 kB", "RssFile" => "17748 kB", "RssShmem" => "0 kB", "VmData" => "67472 kB", "VmStk" => "132 kB", "VmExe" => "25108 kB", "VmLib" => "8 kB", "VmPTE" => "256 kB", "VmSwap" => "272 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "38", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80435fb", "CapEff" => "00000000a80435fb", "CapBnd" => "00000000a80435fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "309672", "nonvoluntary_ctxt_switches" => "10875"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 805 Ngid: 0 Pid: 805 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 805 NSpid: 805 NSpgid: 805 NSsid: 184 VmPeak: 1233548 kB VmSize: 1233548 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 11220 kB VmRSS: 10772 kB RssAnon: 3276 kB RssFile: 7496 kB RssShmem: 0 kB VmData: 40760 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 112 kB VmSwap: 40 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 13 SigQ: 3/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 8 nonvoluntary_ctxt_switches: 0 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "805", "Ngid" => "0", "Pid" => "805", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "805", "NSpid" => "805", "NSpgid" => "805", "NSsid" => "184", "VmPeak" => "1233548 kB", "VmSize" => "1233548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "11220 kB", "VmRSS" => "10772 kB", "RssAnon" => "3276 kB", "RssFile" => "7496 kB", "RssShmem" => "0 kB", "VmData" => "40760 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "112 kB", "VmSwap" => "40 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "13", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "8", "nonvoluntary_ctxt_switches" => "0"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 830 Ngid: 0 Pid: 830 PPid: 805 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 830 1 NSpid: 830 1 NSpgid: 830 1 NSsid: 830 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 
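All of these per-PID snapshots feed the proctree_by_pid walk, which chases PPid links from a starting process until it has collected the matching subtree; for this pod the tree that emerges below contains a single coredns entry with its cmdline attached. A Python sketch of that parent-chasing logic, on the assumption that the Crystal original behaves equivalently:

    def proctree_by_pid(statuses: list[dict[str, str]], root_pid: str) -> list[dict[str, str]]:
        """Return the status dicts for root_pid and all of its descendants."""
        tree = [s for s in statuses if s["Pid"] == root_pid]
        frontier = {root_pid}
        while frontier:
            seen = {t["Pid"] for t in tree}
            # pull in every process whose parent is already in the tree
            children = [s for s in statuses if s["PPid"] in frontier and s["Pid"] not in seen]
            tree.extend(children)
            frontier = {s["Pid"] for s in children}
        return tree
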
Mems_allowed_list: 0-1 voluntary_ctxt_switches: 23 nonvoluntary_ctxt_switches: 7 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "830", "Ngid" => "0", "Pid" => "830", "PPid" => "805", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "830\t1", "NSpid" => "830\t1", "NSpgid" => "830\t1", "NSsid" => "830\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "23", "nonvoluntary_ctxt_switches" => "7"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI: parse_status status_output: Name: sh Umask: 0022 State: S (sleeping) Tgid: 862 Ngid: 0 Pid: 862 PPid: 805 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 1 2 3 4 6 10 11 20 26 27 NStgid: 862 1 NSpid: 862 1 NSpgid: 862 1 NSsid: 862 1 VmPeak: 3552 kB VmSize: 1564 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 1036 kB VmRSS: 84 kB RssAnon: 80 kB RssFile: 4 kB RssShmem: 0 kB VmData: 52 kB VmStk: 132 kB VmExe: 788 kB VmLib: 556 kB VmPTE: 44 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256612 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000004 SigCgt: 0000000000010002 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 905 nonvoluntary_ctxt_switches: 12 [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sh", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "862", "Ngid" => "0", "Pid" => "862", "PPid" => "805", 
"TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0 1 2 3 4 6 10 11 20 26 27", "NStgid" => "862\t1", "NSpid" => "862\t1", "NSpgid" => "862\t1", "NSsid" => "862\t1", "VmPeak" => "3552 kB", "VmSize" => "1564 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "1036 kB", "VmRSS" => "84 kB", "RssAnon" => "80 kB", "RssFile" => "4 kB", "RssShmem" => "0 kB", "VmData" => "52 kB", "VmStk" => "132 kB", "VmExe" => "788 kB", "VmLib" => "556 kB", "VmPTE" => "44 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000004", "SigCgt" => "0000000000010002", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "905", "nonvoluntary_ctxt_switches" => "12"} [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "3538155", "Ngid" => "0", "Pid" => "3538155", "PPid" => "3536069", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "3538155\t1", "NSpid" => "3538155\t1", "NSpgid" => "3538155\t1", "NSsid" => "3538155\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "39544 kB", "VmRSS" => "39544 kB", "RssAnon" => "10216 kB", "RssFile" => "29328 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "192 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "16", "SigQ" => "3/256612", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "505", "nonvoluntary_ctxt_switches" => "18", "cmdline" => 
"/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"}] [2025-09-22 02:52:23] DEBUG -- CNTI-proctree_by_pid: [2025-09-22 02:52:23] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:23] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:23] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:23] INFO -- CNTI-KubectlClient.Utils.exec_bg: Exec background command in pod cluster-tools-mh8tg [2025-09-22 02:52:23] DEBUG -- CNTI: ClusterTools exec: {process: #), @wait_count=2, @channel=#>, output: "", error: ""} [2025-09-22 02:52:24] DEBUG -- CNTI: Time left: 9 seconds [2025-09-22 02:52:24] INFO -- CNTI-sig_term_handled: Attached strace to PIDs: 3538155 [2025-09-22 02:52:27] INFO -- CNTI: exec_by_node: Called with JSON [2025-09-22 02:52:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-09-22 02:52:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-09-22 02:52:28] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-mh8tg [2025-09-22 02:52:28] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-mh8tg [2025-09-22 02:52:28] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-mh8tg [2025-09-22 02:52:33] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} ✔️ 🏆PASSED: [sig_term_handled] Sig Term handled ⚖👀 Microservice results: 2 of 4 tests passed  Reliability, Resilience, and Availability Tests [2025-09-22 02:52:36] INFO -- CNTI-sig_term_handled: PID 3538155 => SIGTERM captured? true [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: true [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'sig_term_handled' emoji: ⚖👀 [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'sig_term_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points: Task: 'sig_term_handled' type: essential [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'sig_term_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points: Task: 'sig_term_handled' type: essential [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.upsert_task-sig_term_handled: Task start time: 2025-09-22 02:52:10 UTC, end time: 2025-09-22 02:52:36 UTC [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.upsert_task-sig_term_handled: Task: 'sig_term_handled' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:25.777825387 [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", 
"privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled"] for tags: ["microservice", "cert"] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["microservice", "cert"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned 
for task: zombie_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 400, max tasks passed: 4 for tags: ["microservice", "cert"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled"] for tags: ["microservice", "cert"] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["microservice", "cert"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", 
"specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 400, max tasks passed: 4 for tags: ["microservice", "cert"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1300, total tasks passed: 13 for tags: ["essential"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", 
"hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:52:36] INFO -- 
CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"]
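A note on the arithmetic behind these totals: every task is worth 100 points, a task with status "na" (here selinux_options) is dropped from the maximum, and a failed task scores 0 but still counts toward it, which is how 19 essential tasks yield "1500 of 1800" and "15 of 18". A minimal Python sketch of that roll-up, with the item list copied from the update_yml dump below (an illustration only, not the suite's Crystal implementation):

# Sketch of the CNTI points roll-up; item dicts copied from the
# update_yml debug output, statuses as recorded in this run.
TASK_POINTS = 100

items = [
    {"name": "increase_decrease_capacity", "status": "passed"},
    {"name": "node_drain", "status": "passed"},
    {"name": "privileged_containers", "status": "passed"},
    {"name": "non_root_containers", "status": "failed"},
    {"name": "cpu_limits", "status": "passed"},
    {"name": "memory_limits", "status": "passed"},
    {"name": "hostpath_mounts", "status": "passed"},
    {"name": "container_sock_mounts", "status": "passed"},
    {"name": "selinux_options", "status": "na"},
    {"name": "hostport_not_used", "status": "passed"},
    {"name": "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status": "passed"},
    {"name": "latest_tag", "status": "passed"},
    {"name": "log_output", "status": "passed"},
    {"name": "specialized_init_system", "status": "failed"},
    {"name": "single_process_type", "status": "passed"},
    {"name": "zombie_handled", "status": "failed"},
    {"name": "sig_term_handled", "status": "passed"},
    {"name": "liveness", "status": "passed"},
    {"name": "readiness", "status": "passed"},
]

scoreable = [i for i in items if i["status"] != "na"]  # NA drops out of the max
passed = [i for i in scoreable if i["status"] == "passed"]

print(f"Total points scored: {len(passed) * TASK_POINTS}, "
      f"total tasks passed: {len(passed)}")            # 1500, 15
print(f"Max points scored: {len(scoreable) * TASK_POINTS}, "
      f"max tasks: {len(scoreable)}")                  # 1800, 18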
"v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:52:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => 
"sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:52:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-09-22 02:52:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => 
"single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 400} [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:52:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-09-22 02:52:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-09-22 02:52:36] INFO -- CNTI: check_cnf_config args: # [2025-09-22 02:52:36] INFO -- CNTI: check_cnf_config cnf: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-09-22 02:52:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [liveness] [2025-09-22 02:52:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Task.task_runner.liveness: Starting test [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.workload_resource_test: Starting test [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-09-22 02:52:36] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-09-22 02:52:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-09-22 02:52:36] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-09-22 02:52:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns ✔️ 🏆PASSED: [liveness] All workload resources have at least one container with a liveness probe ⎈🧫 [2025-09-22 02:52:36] INFO -- CNTI-liveness: Containers in Deployment/coredns-coredns missing livenessProbe: 
[2025-09-22 02:52:36] INFO -- CNTI-liveness: Containers in Deployment/coredns-coredns missing livenessProbe: none
[2025-09-22 02:52:36] INFO -- CNTI-liveness: Resource Deployment/coredns-coredns has at least one livenessProbe?: true
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'liveness' emoji: ⎈🧫
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'liveness' tags: ["resilience", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points: Task: 'liveness' type: essential
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'liveness' tags: ["resilience", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points: Task: 'liveness' type: essential
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.upsert_task-liveness: Task start time: 2025-09-22 02:52:36 UTC, end time: 2025-09-22 02:52:36 UTC
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.upsert_task-liveness: Task: 'liveness' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.231000346
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:52:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-09-22 02:52:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-09-22 02:52:36] INFO -- CNTI: check_cnf_config args: #
[2025-09-22 02:52:36] INFO -- CNTI: check_cnf_config cnf:
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-09-22 02:52:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [readiness]
[2025-09-22 02:52:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Task.task_runner.readiness: Starting test
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.workload_resource_test: Starting test
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.resource_refs: Yielding resources: ["replicaset", "deployment", "statefulset", "pod", "daemonset"]
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.cnf_resources: Map block to CNF resources
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns
[2025-09-22 02:52:36] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns
[2025-09-22 02:52:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-09-22 02:52:36] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns
[2025-09-22 02:52:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
✔️ 🏆PASSED: [readiness] All workload resources have at least one container with a readiness probe ⎈🧫
Reliability, resilience, and availability results: 2 of 2 tests passed
RESULTS SUMMARY
- 15 of 18 total tests passed
- 15 of 18 essential tests passed
Results have been saved to results/cnf-testsuite-results-20250922-024745-528.yml
[2025-09-22 02:52:36] INFO -- CNTI-readiness: Containers in Deployment/coredns-coredns missing readinessProbe: none
[2025-09-22 02:52:36] INFO -- CNTI-readiness: Resource Deployment/coredns-coredns has at least one readinessProbe?: true
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'readiness' emoji: ⎈🧫
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'readiness' tags: ["resilience", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points: Task: 'readiness' type: essential
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'readiness' tags: ["resilience", "dynamic", "workload", "cert", "essential"]
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points: Task: 'readiness' type: essential
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.upsert_task-readiness: Task start time: 2025-09-22 02:52:36 UTC, end time: 2025-09-22 02:52:36 UTC
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.upsert_task-readiness: Task: 'readiness' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.262315293
[2025-09-22 02:52:36] DEBUG -- CNTI: resilience
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["liveness", "readiness"] for tags: ["resilience", "cert"]
[2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["resilience", "cert"]
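Both probe tests above reduce to the same check: fetch the workload resource, enumerate its containers, and pass if at least one container declares the probe. A rough Python equivalent built on kubectl's JSON output (containers_missing_probe is a hypothetical helper, and the cnf-default namespace is an assumption; the suite's own implementation is Crystal):

import json
import subprocess

def containers_missing_probe(kind: str, name: str, probe: str,
                             namespace: str = "cnf-default") -> list[str]:
    """Names of containers in a pod-template resource (Deployment,
    StatefulSet, ...) that do not declare `probe`, e.g. 'livenessProbe'
    or 'readinessProbe'. Namespace is an assumed default."""
    out = subprocess.run(
        ["kubectl", "get", kind, name, "-n", namespace, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    pod_spec = json.loads(out)["spec"]["template"]["spec"]
    return [c["name"] for c in pod_spec["containers"] if probe not in c]

# Mirrors the log: Deployment/coredns-coredns, liveness check.
missing = containers_missing_probe("deployment", "coredns-coredns", "livenessProbe")
print("missing livenessProbe:", missing or "none")
# The test passes when at least one container declares the probe.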
"latest_tag"] for tag: cert [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 200, max tasks passed: 2 for tags: ["resilience", "cert"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["liveness", "readiness"] for tags: ["resilience", "cert"] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["resilience", "cert"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", 
"hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-09-22 02:52:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 200, max tasks passed: 2 for tags: ["resilience", "cert"] [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["essential"] [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", 
"increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: 
increase_decrease_capacity [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-09-22 02:52:37] INFO -- 
CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-09-22 02:52:37] DEBUG -- CNTI: 
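The update_yml entries that follow show the results file being rewritten as each task lands: liveness and readiness are appended to items, and the totals fields are refreshed. A sketch of the upsert step in Python (field names copied from the dumps; the semantics of the top-level points and maximum_points values are deliberately not reconstructed here, since the log alone does not pin them down):

# Sketch of a replace-or-append item upsert into the in-memory results
# mapping; field names mirror the update_yml debug dumps, logic assumed.
def upsert_task(results: dict, item: dict) -> dict:
    results["items"] = [i for i in results["items"] if i["name"] != item["name"]]
    results["items"].append(item)
    return results

results = {
    "name": "cnf testsuite",
    "testsuite_version": "v1.4.5-beta2",
    "status": None,
    "command": "/usr/local/bin/cnf-testsuite cert",
    "points": 200,
    "exit_code": 0,
    "items": [],
}
upsert_task(results, {"name": "liveness", "status": "passed", "type": "essential", "points": 100})
upsert_task(results, {"name": "readiness", "status": "passed", "type": "essential", "points": 100})
print(len(results["items"]), "items recorded")  # 2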
[2025-09-22 02:52:37] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-09-22 02:52:37] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-09-22 02:52:37] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-09-22 02:52:37] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200}
[2025-09-22 02:52:37] DEBUG -- CNTI: cert
[2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["cert"]
[2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["cert"]
["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for 
task: hostport_not_used
[2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
+-----------------------------------------------------+--------+------------+
| TASK                                                | FAILED | MAX POINTS |
+-----------------------------------------------------+--------+------------+
| specialized_init_system                             | true   | 100        |
| single_process_type                                 | false  | 100        |
| zombie_handled                                      | true   | 100        |
| sig_term_handled                                    | false  | 100        |
| increase_decrease_capacity                          | false  | 100        |
| liveness                                            | false  | 100        |
| readiness                                           | false  | 100        |
| hostport_not_used                                   | false  | 100        |
| hardcoded_ip_addresses_in_k8s_runtime_configuration | false  | 100        |
| node_drain                                          | false  | 100        |
| privileged_containers                               | false  | 100        |
| non_root_containers                                 | true   | 100        |
| cpu_limits                                          | false  | 100        |
| memory_limits                                       | false  | 100        |
| hostpath_mounts                                     | false  | 100        |
| log_output                                          | false  | 100        |
| container_sock_mounts                               | false  | 100        |
| selinux_options                                     | false  | 0 (na)     |
| latest_tag                                          | false  | 100        |
+-----------------------------------------------------+--------+------------+
[2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["cert"]
[2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["cert"]
[2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"]
[2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus
[2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"]
[2025-09-22 02:52:37] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential
[2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["essential"]
[2025-09-22 02:52:37] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"]
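The arithmetic behind these tallies: every task carries 100 points, a failed task scores 0, and a task whose status is "na" (here selinux_options) is excluded from both the total and the maximum, which is how 19 items yield a maximum of 1800 and a score of "15 of 18". A minimal Python sketch of that bookkeeping (an illustration only: the testsuite itself is written in Crystal, and tally is a hypothetical name):

    # Hypothetical re-implementation of the point tally logged above; the
    # real CNF Testsuite is Crystal code, so this is only an illustration.
    TASK_POINTS = 100

    def tally(items):
        # items: dicts with "name" and "status" in {"passed", "failed", "na"}
        scored = [i for i in items if i["status"] != "na"]  # "na" tasks drop out
        maximum = TASK_POINTS * len(scored)                 # 18 tasks -> 1800 points
        passed = [i for i in scored if i["status"] == "passed"]
        return TASK_POINTS * len(passed), maximum, f"{len(passed)} of {len(scored)}"

    # The 19 items above (15 passed, 3 failed, 1 na) give (1500, 1800, "15 of 18").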
"cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200} [2025-09-22 02:52:37] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => 
"essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200} [2025-09-22 02:52:37] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800} [2025-09-22 02:52:37] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => 
"essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800} [2025-09-22 02:52:37] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18"} [2025-09-22 02:52:37] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18"} [2025-09-22 02:52:37] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" 
=> "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18", "essential_passed" => "15 of 18"} [2025-09-22 02:52:37] INFO -- CNTI: results yaml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.5-beta2", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18", "essential_passed" => "15 of 18"} 2025-09-22 02:52:37,377 - functest_kubernetes.cnf_conformance.conformance - WARNING - non_root_containers failed 2025-09-22 02:52:37,377 - functest_kubernetes.cnf_conformance.conformance - WARNING - specialized_init_system failed 2025-09-22 02:52:37,377 - functest_kubernetes.cnf_conformance.conformance - WARNING - zombie_handled failed 2025-09-22 02:52:37,378 - functest_kubernetes.cnf_conformance.conformance - INFO - +-------------------------------------------------------------+----------------+ | NAME | STATUS | +-------------------------------------------------------------+----------------+ | increase_decrease_capacity | passed | | node_drain | passed | | privileged_containers | passed | | non_root_containers | failed | | cpu_limits | passed | | memory_limits | passed | | hostpath_mounts | passed | | container_sock_mounts | passed | | selinux_options | na | | hostport_not_used | passed | | hardcoded_ip_addresses_in_k8s_runtime_configuration | passed | | latest_tag | passed | | log_output | passed | | specialized_init_system | failed | | single_process_type | passed | | zombie_handled | failed | | 
2025-09-22 02:52:37,479 - xtesting.ci.run_tests - INFO - Test result:
+-----------------------+------------------+------------------+----------------+
| TEST CASE             | PROJECT          | DURATION         | RESULT         |
+-----------------------+------------------+------------------+----------------+
| cnf_testsuite         | functest         | 05:22            | PASS           |
+-----------------------+------------------+------------------+----------------+
2025-09-22 02:52:39,950 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cnf_uninstall cnf-config=example-cnfs/coredns/cnf-testsuite.yml
Waiting deletion for "coredns" (1/5): [ConfigMap] coredns-coredns
Waiting deletion for "coredns" (2/5): [ClusterRole] coredns-coredns
Waiting deletion for "coredns" (3/5): [ClusterRoleBinding] coredns-coredns
Waiting deletion for "coredns" (4/5): [Service] coredns-coredns
Waiting deletion for "coredns" (5/5): [Deployment] coredns-coredns
All "coredns" resources are gone.
All CNF deployments were uninstalled.
CNF uninstallation succeeded.
2025-09-22 02:52:53,175 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite uninstall_all cnf-config=example-cnfs/coredns/cnf-testsuite.yml
CNF uninstallation skipped. No CNF config found in installed_cnf_files directory.
CNF uninstallation succeeded.
Uninstalling testsuite helper tools.
Testsuite helper tools uninstalled.
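The per-resource deletion waits above suggest a simple poll-until-gone loop over the Helm release's resources. A hedged sketch (the kubectl probe, the namespace, and every name here are assumptions about, not excerpts of, the suite's Crystal code):

    # Hypothetical poll-until-gone loop matching the "Waiting deletion"
    # messages above; all names and the kubectl probe are assumptions.
    import subprocess
    import time

    def wait_until_gone(release, resources, namespace="cnf-testsuite", delay=5):
        # resources: (kind, name) pairs owned by the Helm release
        total = len(resources)
        for i, (kind, name) in enumerate(resources, start=1):
            print(f'Waiting deletion for "{release}" ({i}/{total}): [{kind}] {name}')
            while subprocess.run(
                ["kubectl", "get", kind, name, "-n", namespace],
                capture_output=True,
            ).returncode == 0:  # kubectl exits non-zero once the resource is gone
                time.sleep(delay)
        print(f'All "{release}" resources are gone.')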